AI Engineer (Remote) at Tungsten Technologies: A Deep Dive into the Role, Skills, and Career Opportunity

 

AI Engineer (Agentic AI & LLM Specialist)

Company: Tungsten Technologies
Location: Remote (India)
Job Type: Permanent, Full-time
Compensation: ₹70,000 - ₹80,000 per month


 Hello, Future Builder.

We are Tungsten Technologies, and we are looking for more than just a coder. We are looking for an AI Engineer who is ready to step off the sidelines and build the engines that will drive the next generation of software.

You’ve likely seen the hype surrounding AI. You’ve played with the models. But you know that the real magic doesn't happen in a Jupyter notebook—it happens when you take those models, give them agency, wrap them in robust engineering, and deploy them to solve actual, messy, real-world problems.

If you are obsessed with Agentic AI, if you dream in RAG architectures, and if you are comfortable bridging the gap between heavy Python backends and sleek TypeScript frontends, we want to meet you.


 Who We Are

At Tungsten Technologies, we operate at the bleeding edge of what is possible with Large Language Models (LLMs). We aren't just wrapping ChatGPT in a different UI; we are building complex, autonomous systems that can think, plan, and execute tasks.

We believe that the future of work is remote, flexible, and outcome-driven. We don't care about micromanagement or clock-watching. We care about shipping code that works, systems that scale, and products that delight our users. We are a team of builders, tinkerers, and problem-solvers who support each other in navigating the rapidly changing landscape of Artificial Intelligence.


 The Mission: Why We Need You

We are scaling our AI/ML team and need a hands-on engineer to help us transition from "prototype" to "production."

Your primary mission will be to design and build Agentic AI workflows. These aren't simple chatbots that answer questions. These are autonomous agents that need to understand intent, retrieve specific context (RAG), make decisions, and interact with external APIs to complete tasks.

You will sit right at the intersection of Research (figuring out the best prompting strategy) and Engineering (making sure the API response time is under 200ms).


 What You Will Actually Do (Day-to-Day)

This is a hands-on role. You won’t be managing a team of interns; you will be writing code, debugging context windows, and architecting solutions. Here is what a typical week might look like:

1. Architecting Agentic Workflows

  • Design Autonomous Agents: You will build systems where LLMs act as "brains" that can plan and execute multi-step workflows. This involves defining tools the AI can use and implementing safety guardrails.

  • Orchestration: You will write the logic that chains different AI calls together. If the AI needs to search the web, summarize a PDF, and then email a user, you will build the pipeline that makes that happen seamlessly.
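To give a flavour of what this orchestration looks like in practice, here is a deliberately minimal sketch of a tool-calling agent loop. It assumes the OpenAI Chat Completions API; the single web_search tool is a placeholder, and a real agent would register more tools (PDF summarization, email, and so on) plus much stronger guardrails.

```python
# Minimal tool-calling agent loop (illustrative; assumes the OpenAI Chat Completions API).
import json
from openai import OpenAI

client = OpenAI()

# One placeholder tool; a real agent registers several (search, summarize, email, ...).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top results as text.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def web_search(query: str) -> str:
    # Placeholder: call your search provider of choice here.
    return f"(search results for {query!r} would go here)"

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # guardrail: hard cap on planning steps
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=TOOLS
        ).choices[0].message
        if not reply.tool_calls:        # no tool requested: the model is done
            return reply.content
        messages.append(reply)          # keep the plan in the conversation history
        for call in reply.tool_calls:   # execute each requested tool
            args = json.loads(call.function.arguments)
            result = web_search(**args)
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "Agent stopped: step limit reached."
```

The key idea: the model plans, requests tools, and the loop executes them and feeds the results back until the task is finished or the step limit is hit.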

2. Mastering RAG (Retrieval-Augmented Generation)

  • Context Engineering: LLMs are only as good as the data you feed them. You will design advanced RAG pipelines that retrieve exactly the right chunk of information at the right time.

  • Vector Database Management: You will work with vector stores (like Pinecone, Milvus, or Weaviate) to index data efficiently.

  • Optimizing Retrieval: It’s not just about getting data; it’s about ranking it. You will implement re-ranking strategies so the model answers from the most relevant context instead of hallucinating.
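To make the RAG piece concrete, here is a simplified retrieve-then-generate sketch. It uses OpenAI embeddings with an in-memory index purely for illustration; a production pipeline would use a real vector store (Pinecone, Milvus, Weaviate), smarter chunking, and a re-ranking stage on top.

```python
# Simplified RAG: embed chunks, retrieve by cosine similarity, ground the answer.
# In-memory index for illustration only; swap in a real vector store in production.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

def retrieve(query: str, chunks: list[str], chunk_vecs: np.ndarray, k: int = 3) -> list[str]:
    q = embed([query])[0]
    # Cosine similarity between the query and every chunk, highest first.
    scores = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, chunks: list[str]) -> str:
    context = "\n\n".join(retrieve(query, chunks, embed(chunks)))
    prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```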

3. Full-Stack AI Integration

  • The Python Core: You will spend much of your time in Python, utilizing libraries like LangChain, LlamaIndex, or raw OpenAI/Anthropic SDKs to build the backend logic.

  • The TypeScript Bridge: AI doesn't live in a vacuum. You will use TypeScript/JavaScript to ensure your Python services talk perfectly to our front-end applications. Whether it’s handling streaming responses or managing WebSocket connections for real-time AI interaction, you need to be comfortable in the JS ecosystem.
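As one illustration of that bridge, here is a rough sketch of the Python side: a FastAPI endpoint that streams tokens as server-sent events, which a TypeScript client can consume with fetch() or EventSource. The route and payload shape are made up for the example, not our actual API.

```python
# Python side of a streaming bridge: FastAPI emits tokens as server-sent events.
# A TypeScript client can consume this with fetch() or EventSource.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat/stream")          # illustrative route, not our production API
def chat_stream(req: ChatRequest):
    def token_stream():
        stream = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": req.message}],
            stream=True,
        )
        for chunk in stream:
            delta = chunk.choices[0].delta.content if chunk.choices else None
            if delta:
                yield f"data: {delta}\n\n"   # SSE framing: one data line per delta
        yield "data: [DONE]\n\n"

    return StreamingResponse(token_stream(), media_type="text/event-stream")
```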

4. Production & Deployment

  • Cloud Engineering: You won't just write the code; you’ll ship it. You will deploy your AI applications on cloud platforms (AWS, GCP, or Azure), managing Docker containers and serverless functions.

  • Monitoring: How do we know if the AI is failing? You will set up monitoring to track token usage, latency, and error rates.
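As a taste of what that looks like, below is a small illustrative wrapper that logs latency, token usage, and failures for every LLM call; a real deployment would push these numbers to a metrics backend (CloudWatch, Prometheus, or similar) rather than a plain logger.

```python
# Lightweight LLM-call monitoring: latency, token usage, and failures per call.
# Illustrative only; production metrics belong in a metrics backend, not a logger.
import logging
import time
from openai import OpenAI

logger = logging.getLogger("llm_metrics")
client = OpenAI()

def monitored_completion(**kwargs):
    start = time.perf_counter()
    try:
        resp = client.chat.completions.create(**kwargs)
    except Exception:
        logger.exception("llm_call_failed model=%s", kwargs.get("model"))
        raise
    latency_ms = (time.perf_counter() - start) * 1000
    usage = resp.usage
    logger.info(
        "llm_call model=%s latency_ms=%.0f prompt_tokens=%s completion_tokens=%s",
        kwargs.get("model"), latency_ms, usage.prompt_tokens, usage.completion_tokens,
    )
    return resp
```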


 The "Ideal Candidate" Persona

We are looking for a specific mindset. Resumes are great, but how you think matters more.

  • You represent the "New" Full Stack: You understand that being an AI Engineer isn't just about training models—it's about gluing APIs together intelligently. You are as comfortable with a REST API as you are with a Prompt Template.

  • You are a Tinkerer: When a new model drops (like GPT-4o or Claude 3.5), you are the first to test it. You read the papers, you check the Reddit threads, and you know the difference between "Zero-shot" and "Few-shot" prompting.

  • You value "Done" over "Perfect": In the AI world, things change weekly. You prefer shipping a working V1 and iterating fast rather than spending months architecting the perfect theoretical system.

  • You are a Communication Pro: Since we are fully remote, you know that clear writing is a superpower. You can explain why a prompt isn't working to a non-technical stakeholder without using jargon.


 Essential Qualifications

To succeed in this role, you need to check these boxes. We are looking for depth in these specific areas:

1. The Core Tech Stack

  • Python Proficiency: You write clean, modular, and typed Python code. You know your way around FastAPI or Flask.

  • TypeScript/JavaScript: You aren't afraid of the frontend. You can read and write TS/JS to debug integration issues or build server-side logic in Node.js/Next.js if needed.

2. AI & LLM Expertise

  • Agentic AI Experience: You have built or contributed to projects involving autonomous agents (using tools like AutoGPT, BabyAGI, or custom LangGraph implementations).

  • RAG Mastery: You understand the nuances of chunking strategies, embedding models, and vector search.

  • Prompt Engineering: You treat English as a programming language. You know how to structure prompts to get consistent JSON outputs and how to prevent prompt injection.
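For example (an illustrative sketch, not a take-home test), one common pattern combines an explicit JSON instruction, the API's JSON mode, and schema validation so malformed outputs fail loudly:

```python
# Structured-output prompting: explicit JSON instruction + JSON mode + schema validation.
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

class TicketTriage(BaseModel):   # illustrative schema, not one of our real models
    category: str
    priority: int
    summary: str

PROMPT = """Classify the support ticket below.
Respond with JSON containing exactly these keys: category, priority (1-5), summary.

Ticket: {ticket}"""

def triage(ticket: str) -> TicketTriage:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(ticket=ticket)}],
        response_format={"type": "json_object"},  # forces syntactically valid JSON
    )
    # Pydantic validation makes malformed or missing fields fail loudly.
    return TicketTriage.model_validate_json(resp.choices[0].message.content)
```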

3. Engineering Fundamentals

  • Cloud Deployment: Experience with Docker, CI/CD pipelines, and cloud hosting (AWS/GCP/Azure).

  • Git/Version Control: Standard pull request workflows, code reviews, and branching strategies.


 Nice-to-Haves (Bonus Points)

These are not mandatory, but if you have them, make sure to mention them!

  • Experience with LangChain or LangGraph.

  • Knowledge of local LLMs (Ollama, Llama 3) and fine-tuning.

  • Experience with Vercel AI SDK.

  • A public GitHub repository showing off an AI project you built.


 Why Join Tungsten Technologies?

We know you have options. Here is why this role is the right career move for you:

1. Work on the Frontier You won't be maintaining legacy code from 2015. You will be working with technology that was likely released last month. This is a massive opportunity to future-proof your career and become a subject matter expert in the most in-demand field in tech.

2. True Remote Freedom We are a Remote-First company. We trust you to manage your time. As long as you are attending the necessary syncs and delivering your work, where you work is up to you.

  • Benefit: Work from home (or a cafe, or a co-working space).

  • Benefit: Zero commute time.

3. Competitive Compensation We offer a monthly salary of ₹70,000 - ₹80,000 (approx. ₹8.4L - ₹9.6L LPA). We believe in paying people fairly for their skills and impact.

4. A Culture of Learning and Balance We provide Paid Sick Time and support a healthy work-life balance, and we invest in learning. If you need to spend a day researching a new vector database to solve a problem, that is considered "work," not "wasted time."


 How to Apply

Does this sound like you? If yes, we’d love to chat.

Step 1: The Application Please submit your resume/CV. But more importantly, answer the following questions in your application. We prioritize these answers over generic cover letters:

  1. Do you have specific experience with RAG, LLMs, and Agentic AI workflows? (Please describe a project briefly).

  2. How many years of relevant coding experience do you have?

  3. Are you comfortable working with both Python and TypeScript?

Step 2: The Interview Process We respect your time. Our process is streamlined:

  1. Screening: A quick check of your profile and answers.

  2. Technical Discussion: A conversation with a lead engineer. No reversing binary trees on a whiteboard. We will discuss real AI architecture, your past projects, and how you would solve a current problem we are facing.

  3. Culture Fit: A chat to ensure our values align.

  4. Offer: We move fast.  



 Frequently Asked Questions 

Q: Can I work from anywhere in India?  

Ans: Yes! As long as you have a stable internet connection and can overlap with our core working hours for meetings, your location within India does not matter.

Q: Do I need a degree?  

Ans: We care about skills, not pedigree. If you have a degree, great. If you are self-taught but have a GitHub full of impressive AI agents, even better.

Q: What is the start date?

Ans: We are looking to fill this position immediately.


Ready to build the future? If you are excited about the intersection of LLMs, scalable systems, and real-world AI applications, apply today. Let’s build something intelligent together.

Tungsten Technologies Remote | Full-Time | AI Engineering
