Automotive Software Process Improvement and Capability dEtermination (ASPICE) is a standard created by German car makers. It provides rough guidelines to improve your software development processes and to assess suppliers. This means that practically the whole European automotive industry must follow ASPICE, and if you want to know how software development works there, it provides a good broad overview.
Automotive SPICE is derived from the generic SPICE (ISO/IEC 15504) standard. While there are other instances, ASPICE seems to be the only one that got any traction. You can find a lot of ASPICE material on the internet, unrestricted by paywalls.
It builds on the V-model, which means that for every process from requirements to source code there is a corresponding test. The general idea is this sequence of processes:
- Eliciting requirements from the customer. Customer requirements are usually a mess. The challenge is to reject and negotiate to clean this up.
- Your project maps this into system requirements. This step is necessary to restructure customer requirements into a structure you can work with. It also provides a place to include requirements from other stakeholders.
- A system architect breaks down the requirements into logical services. This includes design decisions like what to do in hardware, what in software, where to run what, and how they communicate.
- For each software service, you derive software requirements from the system requirements. You can usually distinguish system and software requirements by their wording. Something concerning "the car" is a system requirement, while software requirements are often about input and output signals of services.
- A software architect breaks down the software requirements of a service into units. Since we focus on a specific device now, we must manage the available resources (memory, CPU time, etc).
- We design and implement each software unit. This can happen in code or with more abstract models (usually of state machines) which are transformed into code.
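To make this chain concrete, here is a toy sketch in Python (real automotive code would more likely be C, and all requirement IDs and names below are invented for illustration):

```python
# Invented software requirement for illustration:
#   SWR-107: "If the input signal door_open is true for more than 500 ms,
#             the output signal warn_driver shall be set to true."
# SWR-107 would trace up to a system requirement, say SYS-42
# ("the car shall warn the driver about an open door").

def door_warning(door_open: bool, open_duration_ms: int) -> bool:
    """Unit implementing SWR-107.

    Detailed design: a pure function without internal state.
    Interface: input signals door_open and open_duration_ms,
    output signal warn_driver (the return value).
    """
    return door_open and open_duration_ms > 500
```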
These processes form the "left side of the V". On the right side, the corresponding test processes from bottom to top are:
- Unit tests for the design. Does the code match the design? Are non-functional requirements (like not crashing) fulfilled?
- Integration tests for the software architecture. Does the composition of units into a service still work?
- Software qualification tests for the software requirements. Does a service match its requirements? So far, there has been no need to use the actual hardware platform, since we only test the software.
- System integration tests for the system architecture. Does composing all the services into the full system still work?
- System qualification tests for the system requirements. Does the whole system/car match the requirements?
- Acceptance test done by the customer.
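Sticking with the toy example from above, a unit test on the right side of the V could check the code against its detailed design and trace back to the same invented requirement:

```python
import unittest

def door_warning(door_open: bool, open_duration_ms: int) -> bool:
    """Unit under test (same toy SWR-107 example as above)."""
    return door_open and open_duration_ms > 500

class TestDoorWarning(unittest.TestCase):
    """Unit tests for SWR-107; the trace link test -> requirement is in the names."""

    def test_swr107_warns_after_500ms(self):
        self.assertTrue(door_warning(True, 501))

    def test_swr107_no_warning_at_threshold(self):
        self.assertFalse(door_warning(True, 500))

    def test_swr107_no_warning_when_closed(self):
        self.assertFalse(door_warning(False, 9999))

if __name__ == "__main__":
    unittest.main()
```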
In addition to these V-model processes, there are also supporting and management processes. This includes things like archiving and having a plan.
That does look like a heavyweight process and it usually is when instantiated by big companies. However, ASPICE itself is generic and does not specify concrete tools or methodologies. Maybe you use UML, maybe not. Maybe you track thousands of issues in Jira, maybe twenty post-its on a wall. You can do the steps in a lightweight way and still be ASPICE compliant. All the steps look useful to me, so if you ask me what in ASPICE you could skip: I don't know. People may try to merge steps, but this usually turns out to be a problem later.
You can be agile with ASPICE (though other requirements like ISO 26262 are to some degree a blocker). An agile approach would be to start with very few requirements but you still do the whole V.
An ASPICE assessment results in ratings of multiple processes on capability levels 0 to 5, where achievement of each level is rated from "not achieved" (N) to "fully achieved" (F).
- Level 0 means your process can achieve the work products ASPICE defines (source code, requirements, architecture description, test reports, etc.) at most "partially" (P). You have more basic things to worry about than your ASPICE assessment.
- Level 1 means you are "largely" (L) able to produce the specified work products. So you might have gaps here or there, but you get through the whole V.
- Level 2 means you are fully capable of producing the work products and you can largely manage the processes: having a goal, checking progress, and reacting when in danger of missing the goal. The hard part of achieving level 2 is usually not the management part but the "fully level 1" part.
- Level 3 means your organization has centralized standards for how you do things and your project follows them. Many customers only want you to reach level 2 because that suggests you handle their project well. Level 3 is relevant if your customer wants promises for follow-up projects.
- Levels 4 and 5 are practically irrelevant. They aim to make your organization's processes more "predictable" and "innovating".
ASPICE contains no way to aggregate the results across processes, but people often aggregate with the minimum. This means that to claim "ASPICE level 2", you need all (assessed) processes "fully achieved on level 1" and "largely or fully achieved on level 2". The actual result is the rating (N/P/L/F) for each assessed process and each level. The scope is usually reduced to only the relevant processes and levels. Most common is the "VDA scope", which includes the whole V and some of the supporting processes.
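As a sketch of that aggregation logic (the rating data is made up; ASPICE itself only defines the per-process ratings):

```python
RANK = {"N": 0, "P": 1, "L": 2, "F": 3}

def capability_level(ratings: dict) -> int:
    """Capability level of one process from its per-level ratings,
    e.g. {1: "F", 2: "L"}: all lower levels must be fully achieved,
    the level itself at least largely achieved."""
    level = 0
    while level + 1 in ratings:
        nxt = level + 1
        if any(ratings[lvl] != "F" for lvl in range(1, nxt)):
            break
        if RANK[ratings[nxt]] < RANK["L"]:
            break
        level = nxt
    return level

def claimed_level(all_ratings: dict) -> int:
    """Aggregate across processes with the minimum, as described above."""
    return min(capability_level(r) for r in all_ratings.values())

example = {
    "SWE.3": {1: "F", 2: "L"},
    "SWE.4": {1: "F", 2: "F"},
    "SUP.8": {1: "F", 2: "P"},  # one weak process caps the whole claim
}
print(claimed_level(example))  # -> 1
```

The minimum is strict: a single process stuck at level 1 caps the whole claim, no matter how well the rest performs.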
Since my audience is mostly developers as far as I know, let us look deeper into the process where code is produced: "SWE.3 Software Detailed Design and Unit Construction". ASPICE defines a set of base practices, and this is roughly the list of questions an assessor will ask (a toy sketch of a unit answering some of them follows after the list).
- SWE.3.BP1: For each unit, do you have a detailed design which respects all functional and non-functional requirements?
- SWE.3.BP2: What are the interfaces of the software unit?
- SWE.3.BP3: Show documentation about the dynamic behavior of a unit!
- SWE.3.BP4: How did you evaluate your detailed design for "interoperability, interaction, criticality, technical complexity, risks and testability"?
- SWE.3.BP5 Traceability: Which detailed design belongs to which unit? Where is the unit in the software architecture? Which software requirements does the unit satisfy?
- SWE.3.BP6: How do you keep code, design, architecture, and requirements consistent?
- SWE.3.BP7: If you change design or code, whom must you notify and how?
- SWE.3.BP8: Show me the code!
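To make these questions more tangible, here is an invented Python sketch of a unit whose documentation answers BP2 (interface), BP3 (dynamic behavior), and BP5 (traceability). ASPICE does not prescribe any notation, so this is just one possibility; all names and IDs are made up:

```python
from enum import Enum

class WarnState(Enum):
    IDLE = 1
    WARNING = 2

class DoorWarner:
    """Unit 'DoorWarner' (all names and IDs invented).

    Interface (BP2):        step(door_open: bool, dt_ms: int) -> bool
    Dynamic behavior (BP3): IDLE    --door open for > 500 ms--> WARNING
                            WARNING --door closed-----------> IDLE
    Traceability (BP5):     implements SWR-107, part of component BodyControl.
    """

    def __init__(self) -> None:
        self.state = WarnState.IDLE
        self.open_ms = 0

    def step(self, door_open: bool, dt_ms: int) -> bool:
        """Advance the unit by one cycle of dt_ms milliseconds."""
        if door_open:
            self.open_ms += dt_ms
            if self.open_ms > 500:
                self.state = WarnState.WARNING
        else:
            self.open_ms = 0
            self.state = WarnState.IDLE
        return self.state is WarnState.WARNING
```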
Traceability is a big topic which occurs across all processes in the V, as it connects the work products. It becomes important when the customer claims that you do not fulfill one of their requirements. It helps you to quickly drill down to all parts used to satisfy the requirement. There you either fix the problem or trace the code back to another customer requirement to reveal an inconsistency. Also, traceability enables you to measure the project progress by keeping track of which requirements are done.
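A minimal sketch of what such trace links enable, with made-up IDs (real projects keep these links in requirements management tools):

```python
# One dict of trace links per edge in the V (all IDs invented).
customer_to_system = {"CUS-1": ["SYS-42"]}
system_to_software = {"SYS-42": ["SWR-107", "SWR-108"]}
software_to_units = {"SWR-107": ["door_warning"], "SWR-108": ["door_chime"]}
qualification_passed = {"SWR-107": True, "SWR-108": False}

def drill_down(customer_req: str) -> list:
    """All units involved in satisfying one customer requirement."""
    units = []
    for sys_req in customer_to_system.get(customer_req, []):
        for sw_req in system_to_software.get(sys_req, []):
            units += software_to_units.get(sw_req, [])
    return units

def progress() -> float:
    """Share of software requirements whose qualification tests pass."""
    return sum(qualification_passed.values()) / len(qualification_passed)

print(drill_down("CUS-1"))  # ['door_warning', 'door_chime']
print(progress())           # 0.5
```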
A weakness of ASPICE is that an assessment only rates a single snapshot in time. If you merely want to assess a supplier, that is good enough. To actually reap the benefits of doing it right, you must follow ASPICE from the beginning. There seems to be something about large projects or large organizations that spoils the incentives to do so.
Before I joined the automotive industry, I knew the Open Source ways of software development. It is interesting to compare them.
There are no requirements for non-commercial Open Source software, at least not explicitly. Maybe you can consider the bug tracker a partial form of requirements. Maybe the documentation contains snippets. This whole requirements and traceability effort is only necessary because of the customer-supplier relationship. There is money in professional software development, and that sometimes means legal disputes. Automotive software is usually safety-relevant, which also sometimes means legal disputes. In contrast, Open Source explicitly comes "without any express or implied warranties".
Architectural and design decisions are important for all software. However, in Open Source they are rarely documented.
Testing is just as important for Free Software, but there are no formal requirements, architecture, or detailed design to test against. This means developers create tests against imaginary requirements or open standards. Thus, the tests become the requirements. Merging requirements and tests is a possibility for commercial development as well. I can imagine writing requirements in a formal language such that tests can be generated automatically, at least for most of the requirements.
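To sketch that idea (purely hypothetical, nothing ASPICE prescribes): requirements written as machine-readable input/expected-output rules, from which test cases are generated:

```python
# Each requirement as data: (requirement ID, input signals, expected output).
# A formal requirements language could compile down to tuples like these.
FORMAL_REQUIREMENTS = [
    ("SWR-107", {"door_open": True, "open_duration_ms": 501}, True),
    ("SWR-107", {"door_open": True, "open_duration_ms": 100}, False),
    ("SWR-107", {"door_open": False, "open_duration_ms": 999}, False),
]

def door_warning(door_open: bool, open_duration_ms: int) -> bool:
    return door_open and open_duration_ms > 500

# Generate and run one test case per formal requirement entry.
for req_id, inputs, expected in FORMAL_REQUIREMENTS:
    actual = door_warning(**inputs)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{req_id}: {inputs} -> {actual}, expected {expected}: {verdict}")
```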
Thinking about such process stuff shows the complexity that is introduced if you are paid for software development and liable for the resulting product.