Models with a Purpose: Tools to Save Time and Catch Errors

Brandon Henry
October 3, 2024

As a software engineer, I've always struggled with making "models." Whether it's SysML in Cameo, AADL in OSATE, or Rolled-My-Own-DL because I didn't like either of those, it just seemed like a massive waste of time. We could all spend vast amounts of time meticulously writing and rewriting the details of how our software should operate, how it should talk to users or other software, the order in which it needs to perform tasks, how it should handle edge cases... and for what? After all that work, we have to just go and write the actual software, which has no real link to the model anyway! We just switch back and forth, looking at our model, then our code, then back at the model to make sure they're synced up, because if we're all completely honest with ourselves, it's all in our heads. What's the point?

Alright, maybe I'm just a little bitter about making models. They actually have a lot going for them. After all, they do tell us how different software programs should talk to each other, when and in what order tasks should be performed, and sometimes they drill all the way down to what bits should be flipped and how the computer should know when to flip them. Models exist because all of that information is important, and depending on the responsibilities of the software, missing it can be expensive or even cost lives. The model plays an absolutely crucial role as a source of truth for developers figuring out what their code needs to do and for system designers trying to piece together countless requirements. Without these models, engineers must read and deeply understand every piece of code, and that quickly becomes too burdensome even for systems that run on a single computer, let alone systems as deep and extensive as airplanes or hospitals. We need models because they help us understand the pieces of our software puzzle.

Unfortunately, I was overly optimistic when I said models were a "source of truth." Models and the code they're supposed to describe change over time as new requirements come to light or as software bugs arise. The model and the code it describes aren't actually linked together. For example, a system designer makes a model and passes it to an engineer who writes the code to match the model, but then the engineer needs to make changes based on real-world requirements, and those changes may not make it back into the model. Or, let's say a customer adds new requirements to the model, and that information isn't relayed back to the developer. Perhaps a manager reviews the model, thinks something is a typo, and "fixes it" but doesn't tell anybody. Suddenly, the model and the code no longer match, and even minute discrepancies can have tremendous consequences when a software component goes into a system and doesn't do what's expected. If you have a "source of truth" model and a code base that isn't tested against the model, the code becomes the source of truth.

Remember when I said models are used to assemble all kinds of critical infrastructure, from airplanes to health care? You may think the lack of a concrete tie between the model and the code is an issue when system engineers are trying to design those systems, and you're absolutely right. It's also an issue for Jack and Jill, who are trying to create an ordering system for their new coffee shop, or Small Services, LLC, which has been attempting to integrate its new IoT product with the connection APIs provided by Big Brand. Whether systems are made from two chunks of software or a thousand, those systems are put together based on how the software components talk to each other. We use models to understand how that communication happens and what messages and data the programs send back and forth. When the model is wrong, it blinds us to what the code is really doing.

Engineers have been mulling over this issue for years. The general consensus on how to ensure each software component does the right thing is... *drum roll, please* ... lots and lots of tedious, handwritten tests. And listen, this isn't a criticism! That's just the best way to do things when you're designing one-off software pieces. To do something better, you'd have to put a lot of time and energy into creating a whole ecosystem behind the tests, factoring in all the understanding baked into the model about messages, control flow, and operational states. That data would then be turned into a capable piece of code used to test the component. Write that extensive (and expensive!) code for one piece of software, then turn around and do it for the next one? That's not a feasible solution for 99% of companies trying to get a quick turnaround on their products with an ever-increasing demand for speed and security.
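
To make that concrete, here's a minimal sketch, in Python with entirely hypothetical names, of the idea in that last sentence: take the message definitions a model already contains and generate boundary-value test cases from them instead of hand-writing each one.

```python
import itertools

# Hypothetical message definition, as it might be extracted from a model:
# each field carries the valid range the spec already states.
ORDER_REQUEST_SPEC = {
    "quantity": {"min": 1, "max": 99},
    "priority": {"min": 0, "max": 3},
}

def boundary_values(field):
    """Return the classic boundary cases for a ranged integer field."""
    lo, hi = field["min"], field["max"]
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def generate_cases(spec):
    """Cross every field's boundary values into concrete test messages."""
    names = list(spec)
    for combo in itertools.product(*(boundary_values(spec[n]) for n in names)):
        yield dict(zip(names, combo))

for case in generate_cases(ORDER_REQUEST_SPEC):
    print(case)  # each dict is one message to fire at the component
```

Six boundary values per field cross into thirty-six test messages here. Handwritten, those are thirty-six chances for a typo; generated, they change the moment the model does.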

Let's imagine what the ideal test tool would look like, then. To start, we want it to work with all of our in-house software components and any third-party programs we've bought, too. We want to ensure that all the code we're using does what we think it's supposed to do based on our models.

We need to feed it our model so it can pull out all the information about what data goes in and out of each piece of software, what tasks the code should perform and when, and how incoming messages from other programs are handled.
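
As a rough illustration, here's the kind of machine-readable summary such a tool might pull out once it's been fed a model. The structure and names are made up for the sketch, not any real modeling format, and the component is Jack and Jill's ordering system from earlier.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentModel:
    """A toy summary of what a tool might extract from a model."""
    name: str
    inputs: dict = field(default_factory=dict)      # incoming message -> field spec
    outputs: dict = field(default_factory=dict)     # outgoing message -> field spec
    task_order: list = field(default_factory=list)  # required task sequence
    handlers: dict = field(default_factory=dict)    # incoming message -> expected reaction

order_service = ComponentModel(
    name="OrderService",
    inputs={"OrderRequest": {"quantity": "int[1..99]"}},
    outputs={"OrderAck": {"status": "enum{accepted,rejected}"}},
    task_order=["validate", "price", "queue"],
    handlers={"CancelOrder": "emit OrderAck(status=rejected)"},
)
```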

And we would want to test the actual program. As in, run the program and see if it does what it should do. It would be nice to automatically inject a bunch of math into the program and do formal verification to prove it can never logically be wrong, but that's a whole different ball game, and it's pretty hard to beat testing the real thing, at least for now.
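
Here's a minimal sketch of what "run the program and see" could look like, assuming a hypothetical component binary that speaks JSON over stdin and stdout; the protocol is an assumption for the sketch, not anyone's real interface.

```python
import json
import subprocess

def check_against_model(binary, request, expected_status):
    """Launch the real component, send one model-defined message, check the reply."""
    proc = subprocess.run(
        [binary],                   # the actual executable, not a simulation of it
        input=json.dumps(request),  # JSON-over-stdin is assumed for this sketch
        capture_output=True,
        text=True,
        timeout=5,
    )
    reply = json.loads(proc.stdout)
    assert reply["status"] == expected_status, (
        f"model says {expected_status!r}, program said {reply['status']!r}"
    )

# e.g. the model says a zero-quantity order must be rejected:
# check_against_model("./order_service", {"quantity": 0}, "rejected")
```

The point of the sketch is the shape of the loop: the expectations come from the model, and the behavior comes from the real, running program, so any drift between the two shows up as a failing check instead of a surprise in the field.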

But what's the takeaway here? Well, since you asked, the team at Tangram Flex has put tons of time and energy into something that brings a real purpose to modeling systems, and it's nearly ready for you to try out for yourself. With a natural flow through the design process that just makes sense, it saves me time and helps prevent mistakes. Enough spoilers, though. You can read more about it soon enough. See you then.
