LLMs at FAANG (and how we're using them)
It's obvious that AI/LLMs are changing the world. Lots of people have a doomsday view about it. Tbh, sometimes I do as well, but I try not to think about it too much. Do I think my job will be replaced by AI? Hopefully lmfaooo 🤣 Maybe then I'll finally open that cafe I've always wanted to.
Anyways, for the last few months, I’ve been deep-diving into LLMs. I’ve not only been building products around them, but also using them every day in my own workstreams. Here are some notes/thoughts I have about it so far, specifically in big tech.
Using LLMs internally
There's been a big push internally to use LLMs to boost our productivity. Leadership has been pushing it a lot because it’s the new trend, but I think it’s a trend that’s here to stay. There are probably a dozen new AI-related projects released internally every day for other Amazonians to use. From what I can tell, LLMs aren't just being aimed at software engineering, but at every single role.
Example: A chatbot that configures complex settings used for certain functionality on Amazon.com (previously this would have been done manually by a product manager).
Actually, my last project at work was exactly this. I had to create an AI agent that other Amazonians could interact with to configure specific experiences at checkout. It was a pretty cool/rewarding experience. I learned a lot about LLMs:
Advantages/disadvantages between different models (balancing cost vs. intelligence).
Check this really good website out if you want to see differences: https://artificialanalysis.ai/
Prompt engineering – I learned a ton from reading prompts used by other applications like v0 or Cursor: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
Different agent frameworks; I specifically used LangGraph.js.
RAG and agent tools.
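To make the agent-tools idea above concrete, here's a minimal sketch of the loop at the heart of frameworks like LangGraph: the model either answers or requests a tool call, and the tool result gets fed back as context. Everything here is hypothetical and framework-free — a real version would call an actual LLM API instead of the toy `fakeModel`.

```typescript
// Minimal agent loop sketch: the model returns either a final answer
// or a tool request; we run the tool and append the result as context.
// All names here are illustrative, not Amazon's actual setup.

type ModelReply =
  | { type: "answer"; text: string }
  | { type: "tool_call"; tool: string; args: Record<string, string> };

type Tool = (args: Record<string, string>) => string;

function runAgent(
  model: (history: string[]) => ModelReply,
  tools: Record<string, Tool>,
  userMessage: string,
  maxSteps = 5,
): string {
  const history = [userMessage];
  for (let i = 0; i < maxSteps; i++) {
    const reply = model(history);
    if (reply.type === "answer") return reply.text;
    // Execute the requested tool and feed the result back (RAG-style).
    const result = tools[reply.tool](reply.args);
    history.push(`tool:${reply.tool} -> ${result}`);
  }
  return "max steps reached";
}

// Toy "model": asks for a lookup once, then answers with what it saw.
const fakeModel = (history: string[]): ModelReply =>
  history.some((m) => m.startsWith("tool:lookup"))
    ? { type: "answer", text: history[history.length - 1] }
    : { type: "tool_call", tool: "lookup", args: { key: "checkout" } };

const answer = runAgent(
  fakeModel,
  { lookup: (args) => `config for ${args.key}` },
  "What is the checkout config?",
);
```

Real frameworks add state management, streaming, and retries on top, but the core control flow is roughly this.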
"Vibe coding" is also a new trend internally. Amazon Kiro (competitor to Cursor/Windsurf) was launched and some folks internally have begun using it. Both Kiro and Cursor are actually amazing tools. I'd say they've boosted my productivity 10x. Basic projects that would have taken me maybe a week before can now go from idea to code review in a day.
Example: Implement a new page on an internal dashboard that displays the diff between two configurations and highlights the specific parameters that have been changed
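The core of that diff page boils down to something like the sketch below: compare two config objects and collect the parameters whose values changed. The field names are hypothetical, and real configs would be nested rather than flat.

```typescript
// Sketch of the diff logic: given two flat config objects, return the
// keys whose values differ (added, removed, or changed), with both sides.
// Illustrative only -- real configs are nested and typed.

type Config = Record<string, string | number | boolean>;
type Delta = Record<string, { before?: unknown; after?: unknown }>;

function diffConfigs(before: Config, after: Config): Delta {
  const changed: Delta = {};
  const allKeys = new Set([...Object.keys(before), ...Object.keys(after)]);
  for (const key of allKeys) {
    if (before[key] !== after[key]) {
      changed[key] = { before: before[key], after: after[key] };
    }
  }
  return changed;
}

// Hypothetical checkout settings before and after an edit.
const delta = diffConfigs(
  { maxItems: 10, giftWrap: true, region: "US" },
  { maxItems: 25, giftWrap: true, region: "US" },
);
// delta -> { maxItems: { before: 10, after: 25 } }
```

The dashboard page is then mostly rendering: highlight each key in `delta` and show the before/after values side by side.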
However, I can't say this is true for all projects. Amazon is huge, and the systems are extremely complex. There's not enough compute/tokens yet to allow LLMs to process the sheer amount of volume/complexity that our systems have. Some observations:
Writing unit tests yourself is a thing of the past. And don’t even get me started on writing React tests by hand.
Writing annoying boilerplate code has become so much easier and faster.
LLMs work amazingly well for building user experiences. Frontend engineering has become more product focused. I like how Partiful calls their engineers “Product Engineers”.
Although LLMs work great for frontend, backend is still tough.
IDEs like Cursor and Kiro don’t integrate well for languages like Java.
One codebase my team works with has hundreds of classes. LLM agents are not able to fit it within their context window.
Lots of big tech companies have internal alternatives to public tech (for example, we have our own DevOps software that is not available to the general public). To bridge this gap, we have to provide the LLM with additional context so it can understand how to use these things. With Model Context Protocol (MCP) this is getting better, but we are still a while away.
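That "additional context" usually takes the shape of a tool definition handed to the model: a name, a description, and a JSON-schema of the inputs, which is roughly the shape MCP-style tool listings and function-calling APIs use. The tool below is entirely made up — internal tooling like this isn't public — but it shows the idea.

```typescript
// Sketch of describing an internal-only tool to an LLM as a JSON-schema
// tool definition. "deploy_pipeline_status" is a hypothetical internal
// DevOps tool, not a real one; the shape mirrors MCP-style tool specs.

const internalToolSpec = {
  name: "deploy_pipeline_status",
  description:
    "Look up the status of a deployment pipeline in our internal DevOps tool.",
  inputSchema: {
    type: "object",
    properties: {
      pipelineName: {
        type: "string",
        description: "Name of the pipeline to inspect",
      },
    },
    required: ["pipelineName"],
  },
};
```

Once the model sees a spec like this, it can request the tool by name with valid arguments instead of hallucinating commands for software it has never seen.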
Instead of using agentic AI for backend work, I usually just provide the LLM with example code and ask it to do the task one-off. LLMs handle this pretty well.
Example: Given this database, write a data access object class with these functions: X, Y, Z. The database is in X format.
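The kind of output that one-off prompt pattern produces looks roughly like the sketch below: a small DAO class over a store. The method names here are hypothetical stand-ins (not the X, Y, Z from the prompt), and an in-memory `Map` stands in for the real database client.

```typescript
// Illustrative DAO the one-off prompt pattern might produce.
// The in-memory Map stands in for a real database client, and the
// record shape and method names are hypothetical.

interface OrderRecord {
  id: string;
  total: number;
}

class OrderDao {
  private store = new Map<string, OrderRecord>();

  // Insert or overwrite a record keyed by its id.
  save(record: OrderRecord): void {
    this.store.set(record.id, record);
  }

  // Fetch a record, or undefined if it doesn't exist.
  getById(id: string): OrderRecord | undefined {
    return this.store.get(id);
  }

  // Remove a record; returns true if something was deleted.
  deleteById(id: string): boolean {
    return this.store.delete(id);
  }
}

const dao = new OrderDao();
dao.save({ id: "o-1", total: 42 });
```

Because the pattern is self-contained like this, the LLM doesn't need the whole codebase in context — just one good example to imitate.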
The 2x vs 10x engineer
One thing I noticed though is that LLMs are absolutely cooking the juniors. I only have 2 years of professional experience under my belt, but it’s extremely obvious when an intern publishes a code review that was written entirely by AI. Is it a bad thing that it was written with AI? NO. At this point, I think everyone should be using LLMs to help them write code. However, I still think that even if you are not writing the code, you should know exactly what it does, and why the LLM wrote it that way. This is so that:
You learn
You are able to debug issues if they come up in the future
You are able to write maintainable, clean, and understandable code
If you are having the LLM write code and submitting it without any verification, then you are not only doing yourself a disservice by forgoing that learning opportunity, but also making the codebase shittier than it needs to be.
Whether you’re a junior, mid level, or senior, I still think it’s valuable to read over the code that the LLM is producing, because that’s what differentiates the 2x and the 10x engineer. The 10x engineer will leverage LLMs most effectively by stopping them from producing code that will require intensive fixing in the future. Furthermore, they will understand exactly what the LLM needs to do to tackle a certain task. The 2x engineer will need to go back and forth with the LLM dozens of times to get to the finish line; the 10x engineer will only need one or two passes.
Will LLMs replace engineers?
Kinda. I do think LLMs will reduce headcount, and I think anyone claiming otherwise is in denial. However, I don’t think they’ll replace software engineering as a role entirely. Eventually, I think there will no longer be specialized engineering (like backend engineer, mobile engineer, frontend engineer, etc.). All software engineering will become “product engineering”, where engineers prompt LLMs to write code. Engineers instead become chaperones who ensure that the LLM is outputting quality code that is maintainable and scalable.
At least in big tech, I think humans will still be the final ones making decisions about system architecture and design. This is because LLMs cannot be trusted to understand the complexity of these systems (and cannot fathom the additional use cases they may need to support in the future without in-depth context). In business terms, this is the actual high-stakes part of software engineering: bad system design and architecture can stunt product development or, even worse, hurt the customer experience through unmaintainable codebases.