Event Chains in Vehicle SOAs
In traditional automotive systems, event chains are typically viewed from an embedded perspective, focusing solely on on-board systems deeply integrated with hardware components. These event chains involve tightly coupled ECUs (Electronic Control Units) that execute functions based on sensor inputs and actuator commands within the vehicle's hardware boundaries.
In contrast, Service-Oriented Architectures (SOA) in Software-Defined Vehicles (SDVs) extend the concept of event chains beyond the vehicle, incorporating both on-board and off-board components. This creates an end-to-end event processing framework where microservices in the cloud interact with microservices on the vehicle, enabling features such as remote diagnostics, over-the-air updates, and cloud-enhanced functionalities. These interconnected event chains are critical for enabling dynamic, scalable, and flexible vehicle services.
So how are events processed by microservices in a vehicle SOA? Automotive microservices can be built using various system models, including:
Mathematical Models: These models simulate complex systems and are translated into executable code.
State Models: These models describe state transitions using structured tools and are well suited to systems like vehicle doors or power management.
Handcrafted Code: Developers write custom code to implement specific features.
AI Models: These models perform inference tasks, enabling advanced features such as image recognition and predictive maintenance.
Interactions between microservices in an SOA environment occur through event chains or call chains:
Event Chains: Microservices interact asynchronously, triggering events without waiting for responses.
Call Chains: Microservices invoke one another synchronously, waiting for responses before proceeding.
These chains allow complex functionalities, such as a passenger welcome sequence, to be built by orchestrating several microservices.
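The difference between the two interaction styles can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the event bus, topic names, and services are invented for this example, not part of any automotive middleware): the event chain publishes and moves on without waiting for results, while the call chain blocks on a direct invocation.

```python
class EventBus:
    """Fire-and-forget event chain: publishers do not wait for handler results."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Each subscribed handler is triggered; return values are ignored.
        for handler in self._subscribers.get(topic, []):
            handler(payload)

log = []
bus = EventBus()
# Two microservices of a hypothetical passenger welcome sequence:
bus.subscribe("door/approach", lambda p: log.append(f"lights on for {p['user']}"))
bus.subscribe("door/approach", lambda p: log.append("seat position restored"))

# Event chain: trigger the event and continue, expecting no response.
bus.publish("door/approach", {"user": "alice"})

# Call chain: invoke a service directly and block until it answers.
def is_vehicle_stationary():
    return True  # placeholder for a real vehicle-state query

if is_vehicle_stationary():
    log.append("door unlock permitted")
```

In a real deployment the bus would be a networked middleware rather than an in-process dictionary, but the coupling trade-off is the same: event chains decouple producer from consumer, call chains create a synchronous dependency.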
Microservices in SDVs rely on different execution environments, such as:
Microcontrollers: Low-cost, real-time processors used for safety-critical tasks like braking and airbag deployment.
Microprocessors: High-performance CPUs used for AI tasks, image processing, and infotainment.
FPGAs: Field-programmable gate arrays for specialized tasks requiring high-speed, parallel processing.
Each execution environment can have a dedicated operating system and middleware, such as a real-time OS for microcontrollers and Linux-based systems for microprocessors.
Microservices in vehicle SOA can be implemented using several distinct models, depending on the nature of the required functionality. One straightforward approach is using handcrafted code, where a developer writes custom code to implement the desired service. For example, a microservice could access a vehicle’s internal database, perform computations based on retrieved data, and return results such as processing sensor inputs or managing user preferences.
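A handcrafted microservice of this kind might look like the following sketch. The table name, sensor identifiers, and the averaging function are all hypothetical; an in-memory SQLite database stands in for the vehicle's internal data store.

```python
import sqlite3

def average_cabin_temperature(conn):
    """Hypothetical handcrafted service: read sensor rows and derive a value."""
    cur = conn.execute(
        "SELECT value FROM sensor_readings WHERE sensor = ?", ("cabin_temp",)
    )
    values = [row[0] for row in cur.fetchall()]
    return sum(values) / len(values) if values else None

# Stand-in for the vehicle's internal database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?)",
    [("cabin_temp", 21.0), ("cabin_temp", 23.0), ("battery_temp", 35.0)],
)

print(average_cabin_temperature(conn))  # 22.0
```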
Another implementation model involves AI-powered microservices. In this case, a trained AI model is integrated into a microservice that performs inferences using real-world data. Consider the passenger welcome sequence: the system could employ an AI-based microservice to analyze video data from rear-view cameras, detect incoming bicycles, and identify potential hazards.
Mathematical models provide another means of implementation. These models handle complex computations, such as projecting a bicycle's trajectory based on image analysis data. An AI model would first detect and track the bicycle’s movement through a series of video frames. The corresponding mathematical model would then calculate the future trajectory, helping determine whether opening the vehicle door would be safe.
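A very simplified version of such a mathematical model is sketched below. It assumes the AI tracker delivers the bicycle's position in two consecutive frames (hypothetical inputs and coordinate system), extrapolates the trajectory at constant velocity, and tests whether the projected point falls inside a rectangular door swing zone. A production model would use many frames, uncertainty estimates, and calibrated geometry.

```python
def project_position(p0, p1, dt, horizon):
    """Constant-velocity extrapolation: position `horizon` seconds after p1."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)

def door_opening_safe(p0, p1, dt=0.1, horizon=0.5,
                      door_zone=((0.0, 0.0), (1.0, 2.5))):
    """Unsafe if the projected point lies inside the door swing zone (meters)."""
    x, y = project_position(p0, p1, dt, horizon)
    (x_min, y_min), (x_max, y_max) = door_zone
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    return not inside

# Bicycle approaching the door zone: projected into the zone, unsafe.
print(door_opening_safe((0.5, 2.0), (0.5, 1.8)))  # False
# Bicycle moving away from the vehicle: safe to open.
print(door_opening_safe((0.5, 2.0), (0.5, 2.2)))  # True
```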
State models are particularly useful for managing finite state transitions within the vehicle. For example, a microservice could handle the various states of the vehicle’s door, including locked, unlocked, open, and closed. This state management ensures that logical combinations of door and window positions are consistent and safe during vehicle operation.
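The door example above can be sketched as a small finite state machine. The states and transition names are taken from the text; the transition table enforcing which events are legal in which state is an illustrative assumption.

```python
class DoorStateModel:
    """Finite state model for a vehicle door: only listed transitions are legal."""
    TRANSITIONS = {
        ("locked", "unlock"): "unlocked",
        ("unlocked", "lock"): "locked",
        ("unlocked", "open"): "open",
        ("open", "close"): "unlocked",
    }

    def __init__(self):
        self.state = "locked"

    def handle(self, event):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"illegal transition: {event!r} in state {self.state!r}")
        self.state = nxt
        return self.state

door = DoorStateModel()
door.handle("unlock")
door.handle("open")
print(door.state)  # "open"
# door.handle("lock") would raise: a door cannot be locked while open.
```

Encoding the rules as an explicit transition table is what makes unsafe combinations, such as locking an open door, unrepresentable rather than merely unlikely.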
By combining these models—handcrafted code, AI inferences, mathematical computations, and state management—vehicle SOA systems can support complex functionalities like the passenger welcome sequence. Each implementation type plays a specific role, creating a robust, modular, and scalable system capable of handling sophisticated automotive tasks.
To understand how the open-door functionality is implemented within the embedded environment, let’s examine an Autosar Classic-based architecture. At its core lies the microcontroller, a hardware component that includes the CPU, memory, and various peripherals essential for running the vehicle’s embedded software. This microcontroller serves as the execution platform for the embedded system.
To ensure the software remains portable and adaptable across different microcontrollers, the Microcontroller Abstraction Layer (MCAL) standardizes the interface between the hardware and the higher-level software layers. This abstraction layer simplifies hardware access by encapsulating low-level hardware details, enabling software portability.
The ECU Abstraction Layer sits above the MCAL, providing a unified interface to ECU-specific hardware components like door sensors and actuators. This layer abstracts hardware-specific implementations, making it easier for higher software layers to interact with various components regardless of their unique technical details.
Above the ECU Abstraction Layer is the Service Layer, which offers system-wide services such as communication protocols, diagnostics, and memory management. This layer ensures seamless interaction across various ECU functions, independent of the specific hardware involved.
Specialized hardware components like LiDAR or battery management systems may require custom drivers known as complex device drivers. These extend the standard Autosar framework by supporting specialized hardware functions beyond what Autosar natively provides.
At the top of the software stack are the Runtime Environment and the Application Layer. The Runtime Environment acts as middleware, facilitating communication between user-defined applications and the underlying software components. The Application Layer contains vehicle-specific functionality, such as managing door locks, windows, and mirrors.
For the open-door example, a state model in the Application Layer could manage different door states such as locked, unlocked, open, and closed. This model would coordinate interactions with lower software layers, ensuring that door operations comply with defined safety and operational rules. This structured approach, supported by Autosar Classic’s modular architecture, makes managing complex vehicle functionalities both scalable and maintainable.
To illustrate the end-to-end architecture of a vehicle system, let’s consider a typical use case involving a smartphone-based app triggering the vehicle's door-opening event.
The process begins with the smartphone, where the app runs on a standard operating system and application stack. When the user initiates the door-opening command, the app communicates with a cloud-based microservice, typically hosted in a cloud runtime environment also powered by a standard OS and application stack.
From the cloud, the command transitions to the vehicle's on-board system, where a high-performance compute environment awaits. This environment often runs a virtualized operating system capable of hosting containerized microservices. In this setup, microservices execute within a container runtime managed by Kubernetes-like orchestration platforms.
The next step involves message processing through a middleware service, such as the Eclipse KUKSA databroker. This broker facilitates secure and reliable communication between cloud and on-board systems.
Once the command is validated, the vehicle's on-board system performs a series of safety checks before unlocking or opening the door. The safety checks begin by verifying whether the vehicle is stationary, a requirement enforced through integration with the Autosar platform. If the vehicle is not moving, the system activates the rear camera, using AI-powered image recognition to detect any incoming objects or pedestrians that might be at risk during the door-opening process.
Next, the side camera scans for obstacles close to the vehicle's sides, ensuring that the door won’t hit anything upon opening. If all checks pass, the system communicates with the responsible ECU via the Autosar-compliant communication layer. The ECU sends the final command to the vehicle’s door actuator, unlocking and opening the door as requested.
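The safety-check sequence described above can be sketched as an ordered pipeline. Each check function is a hypothetical stand-in for the real integrations (Autosar vehicle state, rear-camera AI, side-camera scan); the actuator is only commanded if every check passes.

```python
def vehicle_is_stationary(state):
    return state["speed_kmh"] == 0

def rear_camera_clear(state):
    # Stand-in for AI-powered image recognition on the rear camera feed.
    return not state["rear_objects"]

def side_camera_clear(state):
    return not state["side_obstacles"]

SAFETY_CHECKS = [vehicle_is_stationary, rear_camera_clear, side_camera_clear]

def try_open_door(state):
    """Run the checks in order; only if all pass is the door actuator commanded."""
    for check in SAFETY_CHECKS:
        if not check(state):
            return f"blocked by {check.__name__}"
    return "door actuator commanded"

print(try_open_door({"speed_kmh": 0, "rear_objects": [], "side_obstacles": []}))
print(try_open_door({"speed_kmh": 0, "rear_objects": ["bicycle"],
                     "side_obstacles": []}))
```

Ordering matters here: the cheap vehicle-state check runs before the expensive camera-based checks, and the first failing check short-circuits the chain.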
In advanced architectures, these event chains seamlessly combine cloud-based and on-board operations, leveraging a mix of microcontrollers and microprocessors. For safety-critical tasks, such as emergency braking or actuator control, microcontrollers certified for ASIL D-level functions ensure maximum reliability. Meanwhile, microprocessors handle complex AI-enabled tasks like perception, sensor fusion, and path planning, even though these components often operate under less stringent QM or ASIL A ratings.
This architecture underscores the complexity of modern vehicle SOAs, where cloud, edge, and embedded systems must work together, ensuring safety, functionality, and a responsive user experience.