AWS for Industries
Executing Container Orchestration with Eclipse BlueChi on the Red Hat In-Vehicle Operating System in the AWS Cloud – Part 1
Introduction
Software is redefining the automotive experience. It is through software that Advanced Driver Assistance Systems (ADAS) can help improve vehicle safety. It is through software that In-Vehicle Infotainment (IVI) systems can deliver audio and video infotainment. And it is through software that automotive manufacturers can offer more reliable predictive maintenance to large fleets.
Because of the heterogeneous mix of hardware and software running on different vehicles, containers offer an ideal approach for delivering standardized software to vehicles. By running software in containers, workloads can be deployed more easily across various vehicle environments. However, popular operating systems and container orchestration platforms designed for data centers are not ideal for vehicle environments, which require higher standards for reliability and safety.
With that in mind, Red Hat has extended their leading enterprise Linux platform to the automotive industry through the Red Hat In-Vehicle Operating System. This OS is designed to meet continuous safety certification requirements from the automotive industry. To streamline the orchestration of containers in such environments, Red Hat has introduced, and contributed to the Eclipse Foundation, Eclipse BlueChi™ for the Red Hat In-Vehicle Operating System – a systemd service controller for running containerized applications in highly regulated environments such as the automotive industry.
Developing and testing new workloads on top of these new systems requires a compute environment that is compatible with, and has parity with, actual vehicle compute environments.
In this four-part post, we show how the AWS Cloud can provide the ideal development and test environment for executing container orchestration with Eclipse BlueChi on the Red Hat In-Vehicle OS. Part 1 describes how Eclipse BlueChi is installed onto a Red Hat In-Vehicle OS environment running on AWS. Part 2 shows how to use BlueChi to control the deployment of containers, with an example from the recent OpenADKit demo at CES 2024. Part 3 provides an example of an ADAS workload running on BlueChi on top of compute instance types using AWS Graviton processors and Amazon EC2 DL2q instances. Part 4 describes how Zenoh, a pub/sub/query protocol, provides the ideal communication protocol between BlueChi-launched containers.
Solution Overview
BlueChi is built around three components (a quick verification sketch follows this list):
- The bluechi-controller service, running on the primary node, controls all connected nodes.
- The bluechi-agent service, with one instance running on each managed node, acts as the agent talking locally to systemd to manage services.
- The bluechictl command line program is meant to be used by administrators to test, debug, or manually manage services across nodes.
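Once an environment like the one described below is deployed, you can confirm each component with standard systemd tooling. This is only a quick sanity check; run each command on the node indicated in the comment:

# On the controller node:
systemctl status bluechi-controller

# On each managed (agent) node:
systemctl status bluechi-agent

# From the controller node, using the administrator CLI:
sudo bluechictl status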
The simplified diagram shown below illustrates how BlueChi can be deployed and executed on the Red Hat In-Vehicle OS in the AWS Cloud.
Figure 1: BlueChi architecture
In this diagram, we deploy Red Hat In-Vehicle OS and BlueChi to simulate an automotive workload running in the cloud. This BlueChi and Red Hat In-Vehicle Operating System environment can be deployed into AWS using a Terraform script. As shown in the diagram, we deploy two Amazon EC2 instances, each running a special Red Hat In-Vehicle Operating System image. One instance serves as the controller node and the other as an agent node.
The controller node runs the bluechi-controller service, and the agent node runs the bluechi-agent service. On each agent node, the bluechi-agent service connects to the controller via D-Bus over TCP (default port 2020). Once connected, the agent registers with the controller, receives requests from the controller, and reports local state changes back to it.
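For reference, the agent's connection settings typically live in a drop-in file under /etc/bluechi/agent.conf.d/. The Terraform user-data script in this post generates this configuration for you; the sketch below is illustrative only, and the node name and controller address are placeholders:

# Illustrative agent drop-in only - the user-data script already configures the agent
# on the deployed instances, so you do not need to run this there.
sudo tee /etc/bluechi/agent.conf.d/1.conf > /dev/null <<'EOF'
[bluechi-agent]
NodeName=<agent hostname>
ControllerHost=<controller address>
ControllerPort=2020
EOF
sudo systemctl restart bluechi-agent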
The example in this post demonstrates running one instance of the Eclipse BlueChi agent on an Amazon EC2 instance. The agent can also be installed on multiple Amazon EC2 instances, as well as on instances of Red Hat In-Vehicle OS running on actual vehicles.
Eclipse BlueChi is meant to be used in conjunction with a state manager (a program or a person) that knows the desired state of the system(s). This design choice has a few consequences that should be considered:
- BlueChi itself does not know the desired final state of the system(s); it only knows how to transition between states, i.e., how to start, stop, or restart a service on one or more nodes.
- BlueChi monitors and reports changes in running services, so that another application such as a state manager is notified when a service stops running or when the connection to a node is lost, but BlueChi itself does not act on these notifications (see the monitoring sketch after this list).
- BlueChi does not handle the “initial setup” of the system; it is assumed that the system boots into a desired state and BlueChi handles transitions from that state.
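For example, a state manager (or an administrator) can subscribe to these change notifications from the controller using the bluechictl monitor subcommand. A minimal sketch, assuming the environment deployed below and a placeholder node name (the exact arguments may vary by BlueChi version):

# Stream unit state changes reported by a managed node; press Ctrl+C to stop.
sudo bluechictl monitor <agent hostname>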
In this post, we use BlueChi to start a process on a remote node. In the next part of the blog series, we’ll show how you can launch containers on each remote node using Podman. Podman is a lightweight yet feature-rich and secure tool for running containers. Since vehicles generally need to run fewer containers than a data center, yet operate under more stringent standards for security and reliability, the authors of this post chose Podman as their container platform.
Deployment
Prerequisites
This BlueChi and Red Hat In-Vehicle OS environment can be deployed into AWS using Terraform.
You’ll need the following before proceeding:
- Terraform v1.6.2 (or higher)
- AWS CLI 1.20+ or 2.13+ (configured with proper AWS credentials)
Keep in mind that Amazon EC2 instances used in this deployment will incur hourly charges.
Clone the code repository:
git clone https://github.com/aws-samples/containers-blog-maelstrom.git
cd containers-blog-maelstrom/rhivos-bluechi
Currently, Automotive Stream Distribution (AutoSD) AMIs are only available in the AWS US East (Ohio) Region. Please configure your AWS CLI to use the us-east-2 Region before proceeding.
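For example, you can set the Region for your current shell session, or persist it in your AWS CLI profile:

export AWS_REGION=us-east-2          # used by the CLI commands later in this post
aws configure set region us-east-2   # or persist the Region in your CLI profile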
In variables.tf, update the key_name variable to an Amazon EC2 key pair in the AWS US East (Ohio) Region.
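If you prefer not to edit the file, Terraform also accepts the value on the command line when you run the plan and apply steps below (this assumes the variable is declared as key_name, as described above):

# Replace the placeholder with the name of your key pair in us-east-2.
terraform plan -var="key_name=<your keypair name>"
terraform apply -var="key_name=<your keypair name>"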
Deploy the Terraform template:
terraform init
terraform plan
terraform apply
The Terraform template will deploy two nodes (you can verify both instances after deployment, as shown below):
1. The controller node runs on an x86 t3a.nano instance
2. The agent node runs on an Arm-based t4g.nano instance
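After terraform apply completes, you can confirm that both instances are running. Note that this query lists all running instances in the Region, so look for the t3a.nano controller and the t4g.nano agent:

aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress,Tags[?Key=='Name']|[0].Value]" \
  --output table \
  --region us-east-2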
SSH into the Controller
Set the AWS Region:
export AWS_REGION=${AWS_REGION:=us-east-2}
Get the IP of the Controller node:
CONTROLLER_PUBLIC_IP=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=AutoSD_Manager" \
"Name=instance-state-name,Values=running" \
--query 'Reservations[*].Instances[0].PublicIpAddress' \
--output text \
--region $AWS_REGION)
SSH into the Controller node:
ssh -i <Your Keypair.pem> ec2-user@${CONTROLLER_PUBLIC_IP}
We’ve configured the deployment to install BlueChi on a Red Hat In-Vehicle OS AutoSD AMI using an Amazon EC2 user-data script. Verify that the user-data script has completed:
sudo tail /var/log/cloud-init-output.log
When the script finishes, cloud-init adds a log entry similar to the one shown below:
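The exact wording varies by cloud-init version, but the final entry typically looks something like the following (version, timestamp, and uptime will differ):

# Cloud-init v. 23.x finished at <timestamp>. Datasource DataSourceEc2.  Up 300.00 seconds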
It takes about five minutes for the user-data script to finish on a t4g.nano instance. Once the user-data script finishes, you can list all nodes:
sudo bluechictl status
The output should show two nodes, the agent and the controller itself.
Use BlueChi to start a service on a managed node
BlueChi allows you to manage systemd services across multiple hosts (Amazon EC2 instances in this post’s context). It integrates with systemd via its D-Bus API and relays D-Bus messages over TCP. When the agent and controller nodes start, the user-data script discovers the agent’s hostname and adds it to the bluechi-controller configuration file located at /etc/bluechi/controller.conf.d/1.conf. Here is a sample of the controller configuration:
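The exact contents are generated by the user-data script; the sketch below is representative, with the agent's node name as a placeholder, and follows the upstream BlueChi controller configuration format:

sudo cat /etc/bluechi/controller.conf.d/1.conf
# Typically contains something like:
# [bluechi-controller]
# ControllerPort=2020
# AllowedNodeNames=<agent hostname>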
Using bluechictl, you can enable, disable, start, and stop services running on agents. Let’s explore this using an example.
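The general pattern is bluechictl <command> <node name> <unit name>. For example (node and unit names are placeholders; these are the operations mentioned above):

sudo bluechictl status  <agent hostname> <unit>.service   # show a unit's state on a node
sudo bluechictl start   <agent hostname> <unit>.service   # start the unit
sudo bluechictl stop    <agent hostname> <unit>.service   # stop the unit
sudo bluechictl enable  <agent hostname> <unit>.service   # enable the unit
sudo bluechictl disable <agent hostname> <unit>.service   # disable the unit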
The agent node installs the httpd service at instance startup. You can check the status of the service using bluechictl from the controller:
sudo bluechictl status <agent hostname> httpd.service
If you try to access the httpd service running on the agent, you’ll get an error because the service doesn’t start automatically.
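For example, from the controller node (the hostname is a placeholder and the exact error text may vary):

curl <agent hostname>
# curl: (7) Failed to connect to <agent hostname> port 80: Connection refused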
Let’s start httpd on the node and try again:
sudo bluechictl start <agent hostname> httpd.service
Accessing the web server on the agent node should work now:
curl <agent hostname>
Using bluechictl, we started the httpd service on the agent node. As we’ll show in the second part of this blog series, you can also start and stop containers on the agent node.
Cloud Simulation
Because the deployment of BlueChi on AWS can be duplicated on actual vehicles also running Red Hat In-Vehicle OS, it enables us to create a simulation in the cloud. With this simulation, automotive developers and QA/validation engineers can develop, test, and simulate vehicle workloads in the cloud prior to the arrival of the actual vehicle hardware. This shift-left approach enables software development earlier in the process, before hardware is available, with final testing done later on the actual hardware. Delays in hardware availability are then no longer a blocker for software development.
Cleanup
Navigate to the rhivos-bluechi directory and run the command below to remove the resources deployed for this blog post:
terraform destroy
Summary and Conclusion
In Part 1 of this blog post, we demonstrated how the Eclipse BlueChi systemd service controller can be installed on the Red Hat In-Vehicle OS running in the AWS Cloud. With BlueChi, automotive developers and engineers can develop, test, and simulate different types of containerized workloads both in the cloud and on vehicles. In Part 2 of the blog series, we will show how Eclipse BlueChi and the Red Hat In-Vehicle OS in the cloud can be used by automotive developers and engineers to develop, test, and deploy containers that meet the compute requirements of different ADAS components.