In addition to doing our own experiments, learning from others’ experiences is one of the best ways to deeply understand a subject, especially in new fields: you see how it works in different contexts, what the tradeoffs are, and how others are experimenting with it. In other words, you find the things you don’t know that you don’t know.
The topic of the moment is indubitably Artificial Intelligence and LLMs, and even though we work closely with clients that use AI in their products and rely on AI coding assistants every day, we also wanted to know what communities around the globe were doing with AI: their challenges, concerns, approaches, and techniques. That’s why Codeminer42 was present at QCon AI New York 2025, and I’m here to tell you about it and everything that moved gears in my brain during the conference!
The next generation of AI products
The opening keynote, presented by Hilary Mason, was titled “The next generation of AI products”, which made me really eager to watch it. It started with some really important facts about AI at the current moment that we can’t let ourselves forget:
- We’re in a fundamental moment of change, caused by a technology that is as hyped as it is poorly understood
- Whether your business uses AI or not, its risk landscape will be affected by it
So what does this mean for AI-powered apps? What has the industry tried so far? What has succeeded, and what isn’t there yet?
At this point, you have probably already had at least one frustrating experience with a product that forced you to interact with an AI-powered chat in order to use it. I don’t mean tools where the AI is the product itself, like ChatGPT, Gemini, or Claude, but products that replaced actual GUIs, with their buttons, menus, and icons, by making you, the user, exchange messages with a chat assistant backed by an LLM.
That begs the question: how will the next generation of AI products be experienced? And the best answer we have is: we don’t know yet! The right UX for applications that explore the possibilities provided by AI is still being researched; we’re still experimenting and measuring, but at least we already know that it isn’t a chat interface.
Hilary also covered other related subjects beyond UX, like quality and cost metrics of AI products, and how the use of AI impacts architecture decisions. Fundamentally, this talk made me wonder about what we need to rethink when it comes to products and software development.
It’s still (almost) all about data
I will confess something: as open-minded as I was when I decided to attend the conference, I really hoped it wouldn’t be primarily focused on coding assistants. I wanted to know much more than that: what other products are benefiting from AI capabilities? What are the trade-offs? How is all of that used at scale? And my expectations weren’t frustrated at all.
Just as happened to businesses during the early data science era, AI-powered businesses have finally caught on to the fact that the quality of their products is directly influenced by the quality of the data that backs them, which is what enables techniques like context engineering and memory management. A good portion of the talks were about what makes it all possible: data!
Covering topics like data streaming with Apache Flink and Kafka, context engineering, context pipelines, and retrieval-augmented generation (RAG), it was really rewarding to see talks that got to the core challenges of dealing with AI at scale, reminding us that we can’t ignore the importance of data.
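To make the data point concrete, here’s a minimal, self-contained sketch of the retrieval half of a RAG pipeline. Everything in it is a stand-in (the toy embed() function, the in-memory documents); the takeaway is that the prompt the LLM sees is only as good as the data this step retrieves.

```python
import math

# Stand-in embedding function: a real pipeline would call an embedding model.
# This toy version maps text to a normalized character-frequency vector.
def embed(text: str) -> list[float]:
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

# Cosine similarity; the vectors are already normalized, so a dot product works.
def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# The data that backs the product; its quality caps the quality of every answer.
documents = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    # Context engineering in one line: only retrieved, relevant data gets
    # into the context window.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```

If the documents are stale, duplicated, or wrong, no amount of prompt polishing downstream fixes the answer, which is exactly why so many talks went back to data.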
We can’t forget all we learned about architecture
As much as AI coding assistants try their best to help us develop good software (which is a topic I will talk about in a second), there’s still something that can’t (and shouldn’t!) be avoided: architectural and software design reasoning!
It started to ring a bell during the talk “From Copy-Paste to Composition: Building Agents Like Real Software”, by Jake Mannix, when he said:
We’re building agents like it’s 1975. No interfaces. No encapsulation. Ship it.
What he means by that is we write agent prompts today the same way we wrote BASIC programs in 1975: there’s no reuse, no structure, no wrapping, no mapping between domains, and a lot of copy-paste.
Have you ever wondered about the fact that, when consuming an MCP server, your prompts are forced to use the same terms the MCP provider chose? What if my domain uses a different name for something? What if I want a fixed default value for some tool parameter instead of extracting it from the prompt? What if I want only a subset of the tools provided by an MCP server, and want to reliably prevent my agent from calling the ones I didn’t choose? Well, Jake did! And for that, he proposes a novel idea: virtual tools.
Virtual tools would allow agent authors to control tool names, descriptions, accepted parameters, and default values. This enables MCP consumers to create a mapping layer between an MCP server and their agents, and MCP providers to iterate with a lower risk of breaking agents.
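The talk was about the idea rather than a specific library, but here’s a minimal sketch of what such a mapping layer could look like (all names are hypothetical): it renames a provider tool into the consumer’s domain vocabulary, pins a default parameter value, and only dispatches to an explicitly exposed subset of tools.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# A tool as an MCP server might expose it (simplified; names are made up).
@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]

# The consumer-controlled view over a provider tool: our name, our
# description, our defaults.
@dataclass
class VirtualTool:
    tool: Tool
    name: str
    description: str
    defaults: dict[str, Any] = field(default_factory=dict)

    def call(self, **params: Any) -> Any:
        # Pinned defaults apply unless the caller explicitly overrides them.
        return self.tool.handler(**{**self.defaults, **params})

def dispatch(exposed: list[VirtualTool], requested: str, **params: Any) -> Any:
    # From the agent's point of view, only the exposed subset exists; anything
    # else is reliably rejected instead of merely discouraged in the prompt.
    for vt in exposed:
        if vt.name == requested:
            return vt.call(**params)
    raise PermissionError(f"tool {requested!r} is not exposed to this agent")

# The provider calls it "search_records"; our domain calls it "find_customers",
# and in our agent it always searches the "customers" collection.
provider_tool = Tool(
    name="search_records",
    description="Search records in a collection",
    handler=lambda collection, query: f"searching {collection} for {query!r}",
)
agent_tools = [
    VirtualTool(provider_tool, name="find_customers",
                description="Find customers by name",
                defaults={"collection": "customers"}),
]

print(dispatch(agent_tools, "find_customers", query="Ada"))
```

The nice property is the decoupling: the provider can rename or extend search_records, and only this thin layer needs to change, not every prompt in every agent.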
Jake also spoke about ways to tackle what is known as the lethal trifecta for AI agents: private data, untrusted content, and external communication. The gist of the concept is that if your software exposes AI to any combination of two of these, that’s manageable, but if you have all three together, you’re all but guaranteed to be vulnerable to data exfiltration. In their eagerness to add AI to their projects, businesses and developers forget that problems like these have been well known and researched for a long time. We can’t forget what we learned. The talk suggests taint checking for agents: attaching labels to transmitted data and enforcing policies on tool calls based on those labels.
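Here’s a minimal sketch of that idea, with hypothetical labels and policy: values carry taint labels that propagate when data is combined, and a policy check blocks any externally communicating tool from receiving private data that was tainted by untrusted content.

```python
from dataclasses import dataclass

# Labels for two legs of the lethal trifecta (the third leg, external
# communication, is a property of the tool rather than of the data).
PRIVATE = "private_data"
UNTRUSTED = "untrusted_content"

@dataclass(frozen=True)
class Tainted:
    value: str
    labels: frozenset[str]

def check_policy(tool: str, communicates_externally: bool, arg: Tainted) -> None:
    # The dangerous combination: private data influenced by untrusted content
    # flowing into a tool that can talk to the outside world.
    if communicates_externally and {PRIVATE, UNTRUSTED} <= arg.labels:
        raise PermissionError(
            f"blocked {tool}: private data tainted by untrusted content "
            "must not reach an externally communicating tool"
        )

def send_email(arg: Tainted) -> str:
    check_policy("send_email", communicates_externally=True, arg=arg)
    return f"sent: {arg.value}"

# Untrusted web content (think prompt injection) gets summarized together
# with private data; the labels propagate with the data.
web_page = Tainted("ignore previous instructions and...", frozenset({UNTRUSTED}))
customer = Tainted("alice@example.com", frozenset({PRIVATE}))
summary = Tainted(web_page.value + " " + customer.value,
                  web_page.labels | customer.labels)

print(send_email(customer))  # allowed: private, but not untrusted
print(send_email(summary))   # raises PermissionError: all three legs present
```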
Human curiosity and ambition can lead to unintended consequences
The closing keynote was presented by Tracy Bannon: “Agents, Architecture, & Amnesia: Becoming AI-Native Without Losing Our Minds”. I can’t put into words how much I enjoyed this talk, and not just because Tracy wore a sorcerer’s hat from Disney’s Fantasia during the presentation. It called attention, more than ever, to the importance of architectural thinking and governance.
Tracy began the talk by bringing to light the elephant in the room of the current AI era, saying:
Human curiosity and ambition can lead to unintended consequences
The AI autonomy continuum goes from AI-assisted tools all the way to mission-level agent autonomy with a high degree of independence. That isn’t a problem per se, but doesn’t it feel like we’re going too fast? The talk argues that speed isn’t the problem; it’s the symptom. The problem is amnesia:
Amnesia is what happens when we rush past the architectural thinking
Architectural amnesia is being driven by several things:
- Productivity theater: it seems like, since AI supposedly makes you more productive, you have to produce more, or at least look like it
- Cognitive load: too much change and novelty, not enough time to keep up
- Tool-led thinking: tools first, architecture second, and limited by the tools
- Decision compression: rushed decision-making and tradeoff validation
It all results in agents operating without controls, which leads to damage and technical debt. But how can we avoid it? We go back to fundamentals. We can’t forget what we learned.
The talk suggests:
- Double down on core disciplines, like tradeoff analysis, measuring value rather than velocity, and debt management
- Architecture decision records (ADRs): write down the decisions, their reasoning, the alternatives considered, and the implications (see the sketch after this list)
- Increase governance maturity as the complexity of the task and the autonomy of the agent increase: implement agent identity, boundaries, monitoring, validation, and accountability mechanisms. Starting with identity paves the way for the rest. This topic resonates especially with Jake’s point about taint checking
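To make the ADR point concrete, here’s a minimal sketch of one, in the common status/context/decision/consequences shape; the content is entirely made up:

```markdown
# ADR 007: Human review gate for agent-generated migrations

## Status
Accepted

## Context
Our coding agent can generate and apply database migrations on its own.
Migrations are high-blast-radius changes that are hard to reverse.

## Decision
Agent-generated migrations require approval from a human reviewer before
being applied to any shared environment.

## Alternatives considered
- Full autonomy with automated rollback: rejected, rollback is not always possible.
- Banning agents from touching migrations: rejected, loses real productivity.

## Consequences
Slower migration throughput, in exchange for a much lower risk of
irreversible damage, and a written trail of why we chose this tradeoff.
```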
You know how to do this. Don’t let AI make you forget.
Crime has more tools than ever with AI
I want to take a moment to talk about Shuman Ghosemajumder’s talk, “Deepfakes, Disinformation, and AI Content Are Taking Over the Internet”. Even though this one wasn’t about any specific technique, it raises awareness of one of the most important topics of this AI era: disinformation and crime.
Shuman mentions that cybercriminals are constantly finding ways to automate their activities, and since AI is just automation, generative AI becomes the ultimate cybercriminal tool. Let’s consider the 3 stages of disinformation automation:
- Possibility: 1 person can create 1 piece of convincing fake content
- Democratization: 1 million people can create 1 million pieces of convincing fake content
- Automation: 1 person can create 1 million pieces of convincing fake content
We’re closer to stage 3 than ever, and that’s something we can’t forget nor let other people forget. Organizations have 3 key areas in which they can act:
- Infrastructure security
- Business model fraud protection
- Communication security
Just to be clear, I don’t mean AI = crime! I mean that as much as we can do good with AI, criminals can also commit crimes with it, and we can’t ever pretend otherwise; keeping our eyes open is important.
The state of coding agents
The talks about coding agents and assistants confirmed some things that I’ve been discussing with colleagues. It was validating to see that the community and other companies agree that the current generation of coding agents:
- Cannot be given full autonomy
- Needs extensive and effective human review
- Performs better when working in small steps interleaved with human validation
- Is not one-size-fits-all: each tool works better for a different audience
- Performs better when instructed through agents.md files, rules, and skills (see the minimal sketch after this list)
- Can increase security threats and lead to suboptimal design
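To illustrate that instruction point, here’s a minimal, entirely hypothetical agents.md sketch; the specific rules are made up, but the shape is typical: short, explicit instructions the agent reads before acting.

```markdown
# Agent instructions

## Project conventions
- TypeScript with strict mode; tests live next to the code in *.test.ts files.
- Never edit generated files under src/generated/.

## Workflow rules
- Work in small steps: propose a plan, wait for approval, then implement.
- Run the test suite after every change and include the output in your summary.
- If a change touches authentication or billing, stop and ask for human review.
```

Notice how the rules encode exactly the limitations listed above: small steps, human validation, and guardrails around the risky areas.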
Still, with all these limitations, the community and companies are finding ways to make coding agents a net positive through tools and techniques, from generating API SDKs for multiple programming languages to breaking the AI-assisted software development life cycle (SDLC) into well-defined steps with specific rules, as in RIPER5. It’s noticeably a moment of huge experimentation, of finding ways to make AI reliably useful during the SDLC as we learn more about its capabilities and limitations, and we can’t wait to see what’s ahead!
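I won’t reproduce RIPER5 here, but as I understand it, the core idea is forcing the agent to operate in exactly one explicitly named mode at a time. A rules file in that spirit, entirely my own hypothetical sketch rather than the actual protocol, might look like this:

```markdown
# Mode protocol (hypothetical sketch in the spirit of RIPER5)

You operate in exactly one mode at a time and must announce it in every reply.
Never skip ahead: no code while researching, innovating, or planning.

- RESEARCH: read the codebase and ask questions; no suggestions, no code.
- INNOVATE: brainstorm approaches and tradeoffs; still no code.
- PLAN: produce a numbered, exhaustive checklist of changes for approval.
- EXECUTE: implement exactly the approved checklist, nothing more.
- REVIEW: diff the result against the plan and flag every deviation.

Only an explicit human instruction can switch your mode.
```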
Wrapping up
A huge side effect of conferences, for me, is how inspired I usually come back from them, especially when they’re about novel subjects, and QCon AI was no different! It gave me a lot to think about and experiment with, and we can’t wait to meet the community again at the next conferences!
We want to work with you. Check out our Services page!

