AWS for Industries

Accelerate Model-Based Development with BTC EmbeddedPlatform on AWS

Introduction

Automotive developers use Model-Based Development (MBD) tools to create software using visual models, then generate and execute tests to verify the software against functional, safety, and other kinds of requirements. Testing is time-consuming, and the number of tests developers need to execute grows with the features they develop. This blog covers how AWS Partner BTC Embedded Systems collaborated with AWS to shorten testing time and accelerate model-based development.

BTC Embedded Systems, founded in Oldenburg, Germany in 1999, provides intelligent and automated test solutions used by automakers and suppliers worldwide. The BTC EmbeddedPlatform (EP) caters to the increased testing needs in the automotive industry by offering diverse testing methods, such as requirements-based testing, back-to-back testing, and formal verification. These tests help verify software quality and support compliance with automotive standards such as ISO 26262. Now, BTC EP is available to run on the AWS Cloud, which brings several benefits to automotive developers:

1. Improved quality through early testing: On premises, limited compute resources often force developers to defer resource-intensive tests to later development stages. In contrast, as shown in this blog, developers can use AWS to provision instances on Amazon Elastic Compute Cloud (Amazon EC2) with the required CPU and memory on demand. This lets developers run tests regularly as part of their daily activities, which can help detect quality issues earlier in the software development cycle (a minimal provisioning sketch follows this list).

2. Enhanced development agility: As shown in Figures 1 and 2 and in Tables 1 and 2, increasing the number of threads used accelerates stimuli vector generation by more than 50 percent. This reduces the time developers have to wait after each code change.

3. Time and cost optimization: AWS offers a variety of Amazon EC2 instance types, so developers can choose the server configuration (number of vCPUs, memory size, and storage) that best fits their cost and runtime needs. This flexibility is a considerable advantage over on-premises options, where developers are typically restricted to a limited range of computing specifications.
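
As a concrete illustration of on-demand provisioning, the following is a minimal sketch using the boto3 EC2 API from Python. The AMI ID, key pair, subnet, and security group are placeholders for values from your own environment, and the instance type is just one possible starting point rather than the exact setup used for the measurements in this blog.

```python
# Minimal sketch: provisioning an on-demand Amazon EC2 instance for a test run with boto3.
# All resource IDs below are placeholders; substitute values from your own AWS account.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")  # any supported AWS Region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: an AMI with BTC EP installed
    InstanceType="m7i.16xlarge",                 # 64 vCPUs / 256 GiB, a general-purpose starting point
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                       # placeholder
    SubnetId="subnet-0123456789abcdef0",         # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "btc-ep-test-runner"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]

# Wait until the instance is running before dispatching tests to it.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"{instance_id} is running and ready for test execution")
```

When a test run finishes, the instance can be stopped or terminated so that you only pay for the time the tests actually consume.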

Stimuli vector generation with BTC EP on AWS

Now that we have outlined the advantages of using BTC EP on AWS, we turn to a specific application: stimuli vector generation for back-to-back testing.

Automotive developers often start development with floating-point models, which offer flexibility, and then generate floating-point or fixed-point code from the models using an auto-code generator. BTC EP allows fully automated back-to-back testing between the model and the generated code, including the generation of time-series test input values, also referred to as stimuli vectors.

For this blog, we generated floating-point and fixed-point code from a floating-point model using TargetLink, provided by another AWS Partner, dSPACE, and measured the stimuli vector generation time for both the fixed-point code and the floating-point code. The fixed-point code had 685 properties, while the floating-point code had 855 properties. “Properties” here refer to coverage properties derived from structural coverage criteria such as Modified Condition/Decision Coverage (MC/DC). BTC EP generates stimuli vectors that cover those properties using model-checking technology. Model checking is a powerful technology that often achieves coverage of up to 100 percent or mathematically proves that properties are unreachable. However, model checking is time-consuming and compute-intensive. The results below show how Amazon EC2 helps customers secure the required compute resources and reduce stimuli vector generation time.

How to determine the required resources to run BTC EP on AWS
Stimuli vector generation time varies with the allocated resources. Depending on the model, more memory, higher I/O throughput, or more vCPUs might be required. On premises, developers can have difficulties allocating the required resources and, as a consequence, end up with long wait times before they can finish their tests and uncover quality issues. While the exact required specifications differ depending on the model and the desired performance, we use five instance types in this blog for demonstration purposes.

M7i, R7i, and R7iz instances feature 4th Generation Intel Xeon Scalable processors with the best performance among comparable Intel processors in the cloud. M7i instances offer a balanced 4:1 memory to vCPU ratio. R7i instances are memory-optimized and offer an 8:1 memory to vCPU ratio. R7iz instances are optimized for high memory and CPU performance, with a 3.9 GHz sustained all-core turbo frequency. C6i instances are compute-optimized and feature a 2:1 memory to vCPU ratio.

Customers can start with general-purpose compute instances such as the m7i.16xlarge, observe the performance, and adjust the instance type as needed.
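
For example, if the first runs show that the workload is memory-bound, the instance can be resized through the EC2 API without rebuilding the environment. The sketch below, again using boto3, stops a hypothetical instance, changes its type, and starts it again; the instance ID and target type are placeholders chosen for illustration.

```python
# Minimal sketch: resizing an existing EC2 instance after observing test performance.
# The instance must be stopped before its type can be changed; the IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder: the instance running BTC EP

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Switch to a memory-optimized type if the workload turned out to be memory-bound.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "r7i.16xlarge"},  # example target type
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```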

Results
To illustrate the reduced runtime that automotive developers can expect by running BTC EP on AWS, we ran stimuli vector generation for fixed-point code on the r7iz.32xlarge instance with a varying number of threads, from 8 to 62. We use 8 threads here to simulate the performance of on-premises setups, where obtaining access to a sufficient number of vCPUs can be challenging. Throughout this blog, we set one thread per vCPU in the CPU options settings of all the instances used.
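
The CPU configuration can be pinned at launch through the EC2 CpuOptions parameter. The following sketch shows one way this could be done with boto3; the core count and resource IDs are illustrative and are not necessarily the exact settings used for the measurements in this blog.

```python
# Illustrative sketch: fixing the CPU configuration at launch via EC2 CPU options.
# The core count below is an example value; the AMI and subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder
    InstanceType="r7iz.32xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 64,        # example: all physical cores of the instance
        "ThreadsPerCore": 1,    # disable multithreading so each vCPU maps to one hardware thread
    },
    SubnetId="subnet-0123456789abcdef0",    # placeholder
)
```

With ThreadsPerCore set to 1, each vCPU corresponds to a single hardware thread, which is one way to keep the mapping between threads and vCPUs predictable during measurements.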

As illustrated in Figure 1 below, we see a 58 percent acceleration in runtime when using 62 threads compared to 8 threads. Combined with the ability to run tests in parallel on different Amazon EC2 instances, this result indicates that test combinations that took days for automotive software developers to complete on premises can now run in hours or potentially even minutes on AWS.

Figure 1: Test execution time vs number of threads

Next, we ran stimuli vector generation for fixed-point code across three additional instance variations. There was a consistent runtime acceleration of more than 40 percent across all four instance types. This indicates that automotive developers will have a wide range of Amazon EC2 instances from which to select to run their tests, depending on their requirements.

Figure 2: Instance Variation

Finally, we ran stimuli vector generation for floating-point code across two instance types and observed an acceleration of more than 52 percent when increasing the number of threads. This is important because floating-point code is gaining wider adoption in the automotive industry.

Stimuli vector generation for floating-point code (seconds)

Table 1

Table 2

The data above also shows how customers can balance cost and performance. Depending on the development schedule, a generation time of 204 seconds using m7i.16xlarge might be sufficient. However, in other cases, larger instances such as r7i.48xlarge can be used to further reduce the generation time.

Conclusion

In this blog, we discussed how running BTC EP on AWS can help improve automotive developers’ productivity and integrate quality tests into daily development activities, helping automotive companies achieve higher product quality and development agility. We also highlighted how customers can save costs by choosing the instance type that fits their development needs.

If you are interested in learning more about the results above or in running BTC EP on AWS, please get in touch with BTC Embedded Systems.

Acknowledgments

We thank dSPACE Japan for facilitating this work and for providing the floating-point model from which the code used in this blog was generated.

Ray Endo

Ray Endo is a Senior Solutions Architect at Amazon Web Services (AWS). Ray joined AWS in 2022 and has been working on cloud-based automotive software development.

Junwoo Lee

Junwoo Lee is a Senior Sales Engineer at BTC Japan. He joined BTC in 2018. Currently, he provides sales and technical support for the automotive industry.

Leif Driebold

Leif Driebold is Senior Product Manager of EmbeddedPlatform at BTC Japan. In his role, he works closely with customers and other stakeholders to capture requirements. Based on these requirements, a future roadmap is defined that helps achieve customer goals.

Taichi Ando

Taichi Ando is Representative Director of BTC Japan. He was in charge of the Sales Engineering department at BTC Japan for 15 years and started his new role in April 2024.

Tatsuo Azeyanagi

Tatsuo Azeyanagi is a Senior Solutions Architect at Amazon Web Services (AWS). He works with customers to accelerate software and machine learning development by leveraging AWS Cloud compute services. He enjoys traveling abroad and cycling.