Over half our code changes are AI-assisted. Here’s what that means
Artificial Intelligence (AI) and Large Language Models (LLMs) are not "magic." They don't take the place of deep product thinking, rail domain knowledge, or responsible engineering. We see them as a significant enhancement to our work.
If used correctly, they make it easier to deliver software, help us learn faster, and give our teams more time to work on the challenging parts, like designing and improving a product that really helps rail operators plan, optimize, and stay in sync.
We are also able to measure this.
Over the past 28 days, AI agents contributed 52.88% of the lines of code added and deleted in our railcube applications.
In other words, AI now speeds up a large part of our engineering work at railcube, but our people remain in charge: they review the work and are accountable for it.
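As an illustration of how a figure like this can be computed, here is a minimal Python sketch. The commit data and the `agent` flag are assumptions for the example; in practice such a flag would come from commit metadata (for instance a commit trailer convention), not from hand-built dictionaries.

```python
# Sketch: computing the share of changed lines attributable to AI agents.
# The "agent" flag is a hypothetical piece of commit metadata; the commit
# list here is sample data for illustration only.

def agent_line_share(commits):
    """Percentage of added + deleted lines from agent-assisted commits."""
    total = sum(c["added"] + c["deleted"] for c in commits)
    agent = sum(c["added"] + c["deleted"] for c in commits if c["agent"])
    return 0.0 if total == 0 else round(100 * agent / total, 2)

commits = [
    {"agent": True, "added": 120, "deleted": 40},
    {"agent": False, "added": 80, "deleted": 30},
    {"agent": True, "added": 25, "deleted": 10},
]
print(agent_line_share(commits))  # 63.93 on this sample data
```

Counting both additions and deletions, rather than surviving lines, is what makes the metric reflect work done rather than code retained.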
What the 52.88% really means
When we say "52.88% of lines of code added and deleted by agents," we mean that AI agents played a big role in making those changes. This usually includes:
- Writing implementation code for a well-defined engineering task
- Suggesting changes to cut down on duplication or make the code easier to read
- Making tests and checks for edge cases based on existing patterns
- Helping with migrations, boilerplate, and repetitive wiring
- Suggesting small, safe changes to the codebase
What it doesn't mean:
- That we ship code without checking it
- That AI dictates our roadmap
- That we "let the model decide"
- That domain decisions are made by anyone other than our experts
In practice, AI agents help the railcube team spend less time on mechanical tasks and more time on decisions that genuinely need people: trade-offs, usability, domain nuance, and correctness under real operational constraints.
Other places we use LLMs and automation
Building a good product involves more than just engineering. Communication, interpretation, and consistency are the other big time sinks. That's exactly where LLMs can help, but only if you use them carefully.
Writing documentation based on comments and cards in Jira
Documentation often fails for boring reasons: it takes a long time, it's easy to put off, and details get lost between when a ticket is made and when it is released.
We task agents with drafting railcube documentation from Jira cards, acceptance criteria, and key discussion threads. This helps us:
- Capture intent while it's still fresh
- Keep the structure of all our documents consistent
- Reduce friction for developers and product staff
- Turn "tribal knowledge" into searchable knowledge
Our coworkers still check the output, especially for tone and accuracy. The agent's job is to help us get a good first draft quickly.
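A simple sketch of this workflow: the agent's draft quality depends mostly on how well the Jira context is assembled. The field names and card shape below are assumptions for illustration, not our actual Jira schema.

```python
# Sketch: assembling the context an agent drafts documentation from.
# The card structure and field names are illustrative assumptions,
# not a real Jira schema.

def build_doc_context(card):
    """Flatten a story card into a text context for a drafting agent."""
    parts = [f"Title: {card['title']}", "Acceptance criteria:"]
    parts += [f"- {c}" for c in card["acceptance_criteria"]]
    parts.append("Key discussion points:")
    parts += [f"- {c}" for c in card["comments"]]
    return "\n".join(parts)

card = {
    "title": "Export roster to PDF",
    "acceptance_criteria": ["A4 layout", "Includes shift codes"],
    "comments": ["Landscape orientation was agreed in refinement"],
}
print(build_doc_context(card))
```

The point of capturing discussion threads alongside acceptance criteria is exactly the "get the intent while it's still fresh" benefit above: decisions made in comments rarely survive into documentation otherwise.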
Making support tickets better with agent analysis
Quality of support is a feature of the product. A well-organized ticket makes it easier to find patterns, speeds up resolution, and cuts down on back-and-forth.
We use agents to review incoming tickets and suggest improvements, such as:
- Missing reproduction steps or context
- Likely category, affected area, or severity signals
- Clarifying questions that could help
- Past issues that could be relevant
This makes our support department faster and more reliable, and it makes it more likely that the first response will be helpful.
Helping teams improve their refinement by comparing story cards to our Definition of Ready
Weak input leads to costly output. If a story isn't clear, the team will have to do more work, wait longer, and deal with surprises later.
We have agents check story cards against our Definition of Ready and flag gaps, such as:
- Unclear user value
- Missing acceptance criteria
- Unstated assumptions or dependencies
- Outcomes that can't be measured
- Missing examples, edge cases, or limits
This helps make refinement talks healthier. It sets a common standard and helps teams ask the right questions sooner.
Why this fits with our mission
Rail operations need accuracy, dependability, and coordination between moving parts. Our railcube customers have real problems to deal with, like not having enough resources, having to change their plans all the time, and always having to stay in sync with the people and systems around them.
Our goal is to make planning easier and help rail companies use their resources more effectively. There are two ways that LLMs help us do that.
First, they improve how we build. When you spend less time on repetitive tasks, you can spend more time on what customers want: better user experience (UX), clearer workflows, more reliable systems, and faster iteration on what matters.
Second, they improve how we learn. Better documentation, better tickets, and better refinement inputs make feedback loops tighter. That means there will be fewer misunderstandings and a product that changes with clearer intent.
In a world where AI is now part of the engineering toolbox, we make our delivery more professional.
What we're keeping an eye on
Grounded adoption means we also talk about the downsides.
The main risks we keep an eye on:
- LLMs can produce output that looks right but is flat-out wrong. We mitigate this with reviews, tests, and clear ownership.
- Private information needs care; we set strict rules about what can be shared with automated systems.
- Faster code generation can make systems messy if you accept everything. We counter this with standards, refactoring discipline, and making readability a top priority.
- Quality drops when teams stop thinking. We treat AI as a partner, not a crutch, and train people to question what it produces.
What's next?
The most important thing is that we always connect AI use to customer outcomes. You can count on the railcube team to keep investing in things that make things easier and better, and to stay skeptical about things that add risk without a clear benefit.
We’re building in the middle of this shift, with measurable results and clear accountability. That's one of the ways we make railcube better.


