LLM Agent Researcher
- Tel Aviv
- Permanent position
- Full-time
- Research and prototype LLM agent workflows for:
  - Dependency graph analysis (e.g., Python, Go, Docker)
  - Security patch recommendations and upgrades
  - Automated recompilation and validation steps
- Design prompt chains and agent flows using LangChain or LangGraph
- Build validation loops in which agents assess agent-generated patches (see the LangGraph loop sketch after this list)
- Run rapid experiments in notebooks or sandboxed environments - productionization is not your concern
- If the agent needs tools (e.g., to inspect Docker image layers), you should be able to:
  - Write a basic utility yourself (see the Docker-layer tool sketch after this list)
  - If it is complex, clearly specify what an Engineer needs in order to build it for you
- Strong understanding of software dependencies and vulnerability management
- Awareness of LLM evaluation techniques, hallucination mitigation strategies, and model tuning methods
- Comfortable working in Jupyter, LangChain, LangGraph, or similar environments
- Familiarity with agent design patterns such as ReAct, Toolformer, LangGraph, or AutoGen multi-agent swarms
- Fluent in experimenting and iterating - treats notebooks as a lab
- Enough scripting experience (Python preferred) to glue tools together
- Curious, independent, and not afraid to explore unconventional ideas
- Experience with context engineering - designing the inputs an LLM needs to reason accurately and efficiently (e.g., graph traversals, filtered SBOM diffs, CVE summaries; see the SBOM-diff sketch after this list)
- Skilled in tools engineering - building Python/Go utilities that agents can invoke (e.g., for image inspection, diff generation, or static analysis)
- Comfortable architecting prompt+tool+validation loops with traceable state and measurable outputs
- You don't need to worry about DevOps, deployment, or full productionization - there will be help with that!
- This is not a frontend/backend/fullstack role. It is agent-centric R&D.
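To give a feel for the agent flows mentioned above, here is a minimal propose-and-validate loop sketch. It assumes LangGraph's StateGraph API; the node bodies, state fields, and CVE string are hypothetical placeholders standing in for real LLM calls and recompile/test steps.

```python
# Minimal sketch of a patch-proposal / validation loop, assuming LangGraph's
# StateGraph API. Node bodies are placeholders for real LLM calls and builds.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class PatchState(TypedDict):
    cve_summary: str        # input: short CVE description for the agent
    proposed_patch: str     # output of the "propose" node
    validation_report: str  # output of the "validate" node
    approved: bool          # routing flag for the loop

def propose_patch(state: PatchState) -> dict:
    # Placeholder for an LLM call that drafts an upgrade or patch plan.
    return {"proposed_patch": f"bump vulnerable package per: {state['cve_summary']}"}

def validate_patch(state: PatchState) -> dict:
    # Placeholder for a second agent (or scripted recompile/test) judging the patch.
    ok = "bump" in state["proposed_patch"]
    return {"validation_report": "build ok" if ok else "build failed", "approved": ok}

def route(state: PatchState) -> str:
    return "done" if state["approved"] else "retry"

graph = StateGraph(PatchState)
graph.add_node("propose", propose_patch)
graph.add_node("validate", validate_patch)
graph.set_entry_point("propose")
graph.add_edge("propose", "validate")
graph.add_conditional_edges("validate", route, {"done": END, "retry": "propose"})
app = graph.compile()

result = app.invoke({
    "cve_summary": "hypothetical CVE in an HTTP client library",
    "proposed_patch": "",
    "validation_report": "",
    "approved": False,
})
print(result["validation_report"])
```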
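And here is the kind of "basic utility" an agent might need, exposed as an invocable tool: listing the layer digests of a local Docker image. It assumes the Docker CLI is installed and that LangChain's `langchain_core.tools` is available; the function name and example image are illustrative only.

```python
# Minimal sketch of a tool an agent could invoke: list the layer digests of a
# locally available Docker image by shelling out to the Docker CLI.
import json
import subprocess

from langchain_core.tools import tool

@tool
def list_image_layers(image: str) -> list[str]:
    """Return the layer digests of a locally available Docker image."""
    out = subprocess.run(
        ["docker", "image", "inspect", image],
        capture_output=True, text=True, check=True,
    ).stdout
    # `docker image inspect` prints a JSON array with one entry per image.
    return json.loads(out)[0]["RootFS"]["Layers"]

# Example use outside of any agent:
# print(list_image_layers.invoke({"image": "python:3.12-slim"}))
```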
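Finally, a small SBOM-diff sketch of what "context engineering" can mean in practice: reducing two SBOMs to the CVE-relevant delta that actually goes into the prompt. The shape assumed here (a "components" list with "name"/"version" fields) follows CycloneDX conventions, but the helper and inputs are simplified and hypothetical.

```python
# Minimal sketch: keep only the CVE-affected package changes between two SBOMs,
# so the LLM context stays small and relevant. Inputs below are hypothetical.
def filtered_sbom_diff(old_sbom: dict, new_sbom: dict, affected: set[str]) -> list[str]:
    old = {c["name"]: c["version"] for c in old_sbom.get("components", [])}
    new = {c["name"]: c["version"] for c in new_sbom.get("components", [])}
    lines = []
    for name in sorted(old.keys() | new.keys()):
        if name not in affected:
            continue  # keep the context small: only CVE-affected packages
        if old.get(name) != new.get(name):
            lines.append(f"{name}: {old.get(name, 'absent')} -> {new.get(name, 'absent')}")
    return lines

old = {"components": [{"name": "requests", "version": "2.28.0"}]}
new = {"components": [{"name": "requests", "version": "2.32.0"}]}
print(filtered_sbom_diff(old, new, affected={"requests"}))
# ['requests: 2.28.0 -> 2.32.0']
```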
Mploy