
Milestone 3

Milestone 3 Status

This milestone was completed March 31, 2026.

A lightweight VIRQ (Virtual IRQ) mapping / routing / couriering subsystem is introduced in OpenSBI to support paravirtual / trap-and-emulate style interrupt dispatching to S-mode payloads, while keeping host physical interrupts handled in M-mode. Implementation is complete and validated on QEMU virt (-M virt,aia=aplic).

The work for this milestone was initially released to RISE on March 15, 2026. Follow-up discussions were conducted to review the design and identify areas for refinement. Based on feedback from the RISE team, additional improvements were implemented to address gaps in domain context handling and system integration. These include the introduction of domain context switching support within the VIRQ subsystem when the target domain differs from the current one, as well as updates to the DeviceTree binding to expose the VIRQ and HWIRQ mapping through standard Linux interrupt extended properties. In addition, the courier notification mechanism was changed from SSE to SEIP to align with the revised design.


Milestone Description

This milestone is about interrupt-driven software context switching. Changes in OpenSBI are required to use external interrupts as triggers for software domain context switching, ensuring that an external interrupt is processed by the software domain that owns it. OpenSBI performs the decoding of the interrupt and relays it to the appropriate software domain via a context switch.

Overview

A lightweight VIRQ (Virtual IRQ) mapping / routing / couriering subsystem is introduced in OpenSBI to support paravirtual / trap-and-emulate style interrupt dispatching to S-mode payloads, while keeping host physical interrupts handled in M-mode.

The VIRQ layer provides:

  • stable and scalable per-MPXY(Message Proxy)-channel mapping between HWIRQ (Hardware IRQ) and VIRQ,
  • DeviceTree-driven domain routing rules,
  • per-(domain,hart) pending queue couriering and state management,
  • SEIP (Supervisor External Interrupt Pending)-based notification,
  • domain-aware VIRQ couriering that allows switching context to the target domain, and
  • an ECALL (Environment Call) extension to pop and complete an enqueued VIRQ.

A test routine is provided to demonstrate the complete IRQ handling path for UART RX (receiver) (HWIRQ 10), from M-mode to an S-mode payload (bare-metal application), exercising:

  • DeviceTree routing-rule population and VIRQ handler registration during cold boot
  • HWIRQ / VIRQ mapping and routing to the destination domain when a HWIRQ is asserted at run-time
  • HWIRQ masking, VIRQ enqueue, and SEIP notification
  • a domain context switch when the target domain is not the current one
  • the S-mode payload trapping on SEIP, popping the pending VIRQ via ECALL, and running the interrupt service routine (ISR)
  • completing the interrupt via ECALL, followed by unmasking the HWIRQ to allow further interrupts
  • a context switch back to the previous domain (if applicable) once all pending VIRQs are handled

Background and Motivation

In current RISC-V systems using OpenSBI, host interrupts (HWIRQs) are typically handled in M-mode (implemented by RP016-M2). S-mode payloads (Linux, RTOS, or bare-metal applications) rely on standard interrupt delegation or platform-specific mechanisms. Domain isolation allows partitioning harts and resources, but interrupt routing across domains remains coarse-grained. Supporting paravirtualization and trap-and-emulate interrupt models while M-mode retains ownership of physical IRQ lines requires a mechanism to:

  • keep physical IRQ ownership in M-mode.
  • route selected interrupts to their destination domains.
  • deliver them in a controlled, queue-based manner.
  • allow S-mode payloads to explicitly acknowledge and complete interrupt delivery.

These requirements drove the design of the VIRQ subsystem, which integrates cleanly with the existing irqchip abstraction in OpenSBI.

Design Goals

  1. HWIRQ domain routing rules should be part of the domain configuration and flexible to extend.
  2. S-mode does not need to know physical interrupt topology.
  3. VIRQ pending queue should be managed per-domain and per-hart.
  4. All enqueued VIRQs should be handled in FIFO order (first come, first served).
  5. Domain context switch is required if the target domain differs from the current domain.
  6. The design and implementation can be demonstrated and verified using UART RX (receiver).

Non-goals

  • Per-hart arbitrary IRQ priorities.
  • RPMI-SYSIRQ-based delivery (to be delivered later).

High-Level Architecture

Design Overview

The VIRQ layer is composed of four major parts:

  1. HWIRQ / VIRQ mapping and allocation
    • Provides a stable per-MPXY-channel mapping between a host interrupt endpoint (chip_uid, hwirq) and a VIRQ number.
    • VIRQ number allocation uses a growable bitmap.
  2. HWIRQ / domain routing rules
    • Routing rules are described in DeviceTree using the standard Linux IRQ property interrupts-extended under the node /chosen/opensbi-domains/rpmi_sysirq_intc, which emulates an MPXY channel for a particular domain identified by opensbi,mpxy-channel-id.
    • Each entry is converted and cached as a routing rule.
    • Default behavior, for backward compatibility: if an asserted HWIRQ does not match any routing rule, it is routed to the root domain (MPXY channel 0).
  3. Per-(domain,hart) pending queue couriering
    • Each domain maintains a per-hart ring buffer queue of pending VIRQs.
    • On an asserted HWIRQ, the registered VIRQ handler maps (chip_uid,hwirq) to a VIRQ number, looks up the destination domain via the routing rules, masks the host HWIRQ (to avoid level-trigger storms), pushes the VIRQ into the per-(domain,hart) pending queue, and sets SEIP to notify the S-mode payload.
    • During VIRQ dispatching, a domain context switch is performed for entering and returning from the target domain if it differs from the current domain.
  4. VIRQ ECALL extension
    • ECALL extension provides pop and complete functionality to retrieve and finish the next pending VIRQ from the per-(domain,hart) queue for an S-mode payload trapped by SEIP.

HWIRQ / VIRQ Mapping and Allocation

VIRQ mapping model: VIRQ number allocation via growable bitmap - capacity expands as needed, memory usage scales with the number of active mappings.

  • Forward lookup [(chip_uid,hwirq) to VIRQ] is via dynamic vector of entries.
  • Reverse lookup [VIRQ to (chip_uid,hwirq)] is via a chunked table allocated on demand.

The mapping is per-MPXY-channel; that is, each MPXY channel owns one map.

The mapping is stable across runtime (until reboot) and chip-agnostic via the chip_uid of an irqchip instance (it works for APLIC, PLIC, IMSIC, etc.); thus S-mode does not need to know the physical interrupt topology, and the routing and queueing logic remains generic.

/* Entry of reverse mapping table: represents (chip_uid,hwirq) endpoint */
struct virq_entry {
    u32 chip_uid;
    u32 hwirq;
};

/* Chunked reverse mapping table: VIRQ -> (chip_uid,hwirq) */
struct virq_chunk {
    struct virq_entry e[VIRQ_CHUNK_SIZE];
};

struct map_node {
    u32 chip_uid;
    u32 hwirq;
    u32 virq;
};

struct sbi_virq_map {
    spinlock_t lock;

    /* allocator bitmap */
    unsigned long *bmap;
    u32 bmap_nbits; /* virq range: [0..nbits-1] */

    /* reverse table: virq -> endpoint */
    struct virq_chunk **chunks;
    u32 chunks_cap; /* number of chunk pointers */

    /* forward table: vector of mappings, linear search */
    struct map_node *nodes;
    u32 nodes_cnt;
    u32 nodes_cap;
};

struct sbi_virq_map_list {
    u32 channel_id;
    struct sbi_virq_map map;
};

A public API for VIRQ mapping is available to:

  • initialize allocator;
  • allocate a new mapping or return an existing mapping;
  • perform forward lookup;
  • perform reverse lookup;
  • unmap entries.

See the section Mapping API for the detailed programming interface.
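As an illustration of the forward map plus bitmap allocator described above, here is a minimal self-contained sketch; the fixed capacities and names are hypothetical stand-ins, not the OpenSBI implementation (which uses a growable bitmap, a dynamic forward vector, and a chunked reverse table):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VIRQ  64 /* toy capacity; the real bitmap grows on demand */
#define MAX_NODES 32

struct map_node { uint32_t chip_uid, hwirq, virq; };

static struct map_node nodes[MAX_NODES]; /* forward table, linear search */
static uint32_t nodes_cnt;
static uint64_t bmap;                    /* one allocation bit per VIRQ */

/* Return the existing VIRQ for (chip_uid, hwirq), or allocate a new one. */
static int map_one(uint32_t chip_uid, uint32_t hwirq, uint32_t *out_virq)
{
    for (uint32_t i = 0; i < nodes_cnt; i++) {
        if (nodes[i].chip_uid == chip_uid && nodes[i].hwirq == hwirq) {
            *out_virq = nodes[i].virq; /* stable: same endpoint, same VIRQ */
            return 0;
        }
    }
    if (nodes_cnt == MAX_NODES)
        return -1;
    for (uint32_t v = 0; v < MAX_VIRQ; v++) {
        if (!(bmap & (1ULL << v))) { /* first free bit = lowest free VIRQ */
            bmap |= 1ULL << v;
            nodes[nodes_cnt++] = (struct map_node){ chip_uid, hwirq, v };
            *out_virq = v;
            return 0;
        }
    }
    return -1; /* bitmap exhausted */
}
```

Mapping the same (chip_uid, hwirq) endpoint twice returns the same VIRQ, which is what makes the mapping stable across runtime.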

HWIRQ / Domain Routing Rules

Routing rules are described in the DeviceTree property interrupts-extended under the node /chosen/opensbi-domains/rpmi_sysirq_intc.

For example:

rpmi_sysirq_intc: interrupt-controller {
    compatible = "opensbi,mpxy-sysirq";
    interrupt-controller;
    #interrupt-cells = <1>;
    interrupts-extended = <&aplic HWIRQ IRQ_TYPE>, // virq 0
                          <&aplic HWIRQ IRQ_TYPE>; // virq 1
    opensbi,mpxy-channel-id = <4>; // per system design
    opensbi,domain = <&domain1>;
};

VIRQ numbers are allocated from zero, implicitly from the order of the entries within the interrupts-extended property, and each pair <&aplic HWIRQ IRQ_TYPE> is internally stored as:

struct sbi_virq_route_rule {
    u32 hwirq;
    struct sbi_domain *dom; /* owner domain */
    u32 channel_id;         /* MPXY channel */
};

When a HWIRQ is asserted, the VIRQ layer:

  1. checks the routing rules,
  2. if matched, routes the HWIRQ to the destination domain,
  3. if no match, routes the HWIRQ to the root domain.

This ensures:

  • backward compatibility with domains without an explicit routing rule,
  • safe default behavior.
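The lookup-with-fallback behaviour can be sketched in standalone C; the rule table, domain structs, and names below are illustrative stand-ins, not the OpenSBI types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct domain { const char *name; };

struct route_rule { uint32_t hwirq; struct domain *dom; };

static struct domain root    = { "root" };
static struct domain domain2 = { "domain2" };

/* Rules as they would be populated from DT interrupts-extended entries. */
static struct route_rule rules[] = {
    { 10, &domain2 }, /* UART RX */
    { 20, &domain2 }, /* test placeholder */
};

/* Return the destination domain; unmatched HWIRQs fall back to root. */
static struct domain *route_lookup(uint32_t hwirq)
{
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
        if (rules[i].hwirq == hwirq)
            return rules[i].dom;
    return &root; /* safe default: the root domain handles it as before */
}
```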

A public API for VIRQ routing is available to:

  • reset routing state;
  • add a new routing rule to a domain;
  • lookup the destination domain for a given HWIRQ.

See the section Routing API for the detailed programming interface.

Per-(domain,hart) Pending Queue Couriering

Each domain maintains a per-hart ring buffer queue of pending VIRQs with a conceptual structure:

domain
├── hart0 queue
├── hart1 queue
└── ...

When a HWIRQ is asserted, after it is mapped to a VIRQ (see section HWIRQ / VIRQ Mapping and Allocation) and routed to a destination domain (see section HWIRQ / Domain Routing Rules), the VIRQ layer will:

  1. mask the physical HWIRQ,
  2. push VIRQ into (domain, hart) queue, and
  3. set SEIP notification.

Data structures used for per-domain VIRQ state management:

/*
 * Per-(domain,hart) VIRQ state.
 *
 * Locking:
 * - lock protects head/tail and q[].
 *
 * Queue semantics:
 * - q[] stores VIRQs pending handling for this (domain,hart).
 * - enqueue is performed by M-mode according to the route rule
 *   populated from DT.
 * - pop/complete is performed by the S-mode payload running in the
 *   destination domain on the current hart.
 * - chip caches the irqchip device for unmasking on complete.
 */
struct sbi_domain_virq_state {
    spinlock_t lock;
    u32 head;
    u32 tail;

    /* Pending VIRQ ring buffer. */
    struct {
        u32 virq;
        u32 channel_id;
        struct sbi_irqchip_device *chip;
    } q[VIRQ_QSIZE];

    /* Return to previous domain after VIRQ completion. */
    bool return_to_prev;
};

/*
 * Per-domain private VIRQ context.
 *
 * Attached to struct sbi_domain and contains per-hart states.
 */
struct sbi_domain_virq_priv {
    /* number of platform harts */
    u32 nharts;

    /* number of allocated per-hart states */
    u32 st_count;

    /* per-hart VIRQ state pointer array (indexed by hart index) */
    struct sbi_domain_virq_state *st_by_hart[];
};

A public API for VIRQ couriering is available to:

  • enqueue a VIRQ to the destination domain / hart;
  • pop the next pending VIRQ for the destination domain / hart;
  • complete a previously couriered VIRQ for the destination domain / hart;
  • provide a courier handler for registration as an irqchip callback.

See the section Courier API for the detailed programming interface.
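The head/tail queue semantics above can be exercised with a minimal single-hart sketch; the fixed-size ring and names are illustrative, and the real queue also stores channel_id and the irqchip pointer per entry, under a spinlock:

```c
#include <assert.h>
#include <stdint.h>

#define QSIZE        8
#define VIRQ_INVALID 0xffffffffu

static uint32_t q[QSIZE];
static uint32_t head, tail; /* empty when head == tail */

/* Enqueue (M-mode side); one slot stays free to distinguish full from empty. */
static int push(uint32_t virq)
{
    uint32_t next = (tail + 1) % QSIZE;

    if (next == head)
        return -1; /* queue full */
    q[tail] = virq;
    tail = next;
    return 0;
}

/* Pop (S-mode side via ECALL); FIFO order, zero is a legal VIRQ number. */
static uint32_t pop(void)
{
    uint32_t virq;

    if (head == tail)
        return VIRQ_INVALID; /* nothing pending */
    virq = q[head];
    head = (head + 1) % QSIZE;
    return virq;
}
```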

VIRQ ECALL Extension

A vendor-defined SBI extension providing:

  1. POP: retrieves the next pending VIRQ from (domain, hart).
  2. COMPLETE: marks a VIRQ as handled and unmasks the physical HWIRQ.

VIRQ ECALL extension definitions:

/* Vendor extension base range is defined by the SBI spec. Choose a private ID. */
#define SBI_EXT_VIRQ 0x0900524d

/* Function IDs for SBI_EXT_VIRQ */
#define SBI_EXT_VIRQ_POP      0
#define SBI_EXT_VIRQ_COMPLETE 1

/*
 * SBI_EXT_VIRQ_POP
 * Returns:
 *     a0: SBI error code (0 for success)
 *     a1: next pending VIRQ (VIRQ_INVALID if none pending)
 */

/*
 * SBI_EXT_VIRQ_COMPLETE
 * Input:
 *     a0: VIRQ to complete
 * Returns:
 *     a0: SBI error code (0 for success)
 */

S-mode Handling (bm-app Test Payload)

The bare-metal application changes for this milestone include:

  • SEIP setup / enablement
  • UART interrupt enablement
  • SEIP trapper
  • SEIP handler
  • ECALL wrappers to pop / complete a VIRQ
  • simple VIRQ / HWIRQ mapping and lookup for testing
  • UART RX (receiver) ISR for testing.

On an SEIP trap, the bare-metal application's trap handler runs, calls the VIRQ POP ECALL to get the next pending VIRQ, and executes the ISR logic; after finishing and clearing the device, it calls the VIRQ COMPLETE ECALL to unmask the physical HWIRQ.
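A hedged sketch of that drain loop, with the two ECALLs stubbed by a local array so the logic is runnable in isolation (a real payload would issue SBI_EXT_VIRQ_POP / SBI_EXT_VIRQ_COMPLETE ecalls instead):

```c
#include <assert.h>
#include <stdint.h>

#define VIRQ_INVALID 0xffffffffu

/* Stubs standing in for the POP/COMPLETE ECALLs. */
static uint32_t pending[] = { 0, 1, VIRQ_INVALID };
static uint32_t pend_idx, completed;

static uint32_t virq_pop(void)           { return pending[pend_idx++]; }
static void virq_complete(uint32_t virq) { (void)virq; completed++; }

/* SEIP trap handler body: drain every pending VIRQ, run its ISR, complete. */
static void seip_handler(void)
{
    for (;;) {
        uint32_t virq = virq_pop();

        if (virq == VIRQ_INVALID)
            break; /* empty queue: M-mode clears SEIP / switches back */
        /* ...run the ISR for virq and clear the device here... */
        virq_complete(virq); /* unmasks the physical HWIRQ in M-mode */
    }
}
```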

UART RX (Receiver) End-to-End Interrupt Flow

UART RX
 → APLIC
  → IDC.CLAIMI
   → MEIP asserted
    → OpenSBI trap handler
     → irqchip IRQ handler
      → VIRQ registered handler
       → HWIRQ->VIRQ mapping
       → Route VIRQ to the destination domain
       → Enqueue VIRQ, mask HWIRQ
       → SEIP set
       → Context switch (when target dom != current dom)
        → S-mode SEIP trap handler
         → Pop VIRQ from queue via ECALL
         → UART RX ISR
         → Clear RX FIFO
         → Complete VIRQ via ECALL (unmask HWIRQ)
        → SEIP clear
        → Context switch (when a previous switch occurred)

This demonstrates a full interrupt lifecycle:

assert → claim → map → route → enqueue → notify → dequeue → handle → complete


Build Steps and Test Instructions

Testing System Architecture

In our test running on a 4-CPU QEMU virt system:

  • Harts 0 / 1 for Linux: Linux runs as the S-mode payload of the root domain.
  • Hart 2 for bare-metal applications: bm-app1 runs as the S-mode payload of domain1, and bm-app2 runs as the S-mode payload of domain2.
  • Hart 3 is free.

The test demonstrates that UART RX (HWIRQ 10) interrupts are mapped / routed / couriered to the bare-metal applications correctly, following the routing rules defined in the DeviceTree, without impacting other interrupt handling in Linux.

                +--------------------+
                |      OpenSBI       |
                |      (M-mode)      |
                |--------------------|
                |   Domain Manager   |
                |    VIRQ mapping    |
                |    VIRQ routing    |
                |  VIRQ couriering   |
                | SEIP notification  |
                +---------+----------+
                          |
        +-----------------+------------------+
        |                                    |
   +----v----+                     +---------v---------+
   | hart0/1 |                     |       hart2       |
   +---------+                     +---------+---------+
   |  root   |                     | domain1 | domain2 |
   |  Linux  |                     | bm-app1 | bm-app2 |
   +---------+                     +---------+---------+

Build via Buildroot Project

Get Buildroot source code:

$ git clone https://gitlab.com/riseproject/riscv-optee/buildroot.git -b rp016_m3_virq_v2

Configure Buildroot:

$ cd buildroot
$ make qemu_riscv64_virt_optee_defconfig

Build:

$ make -j$(nproc)

To avoid build errors due to outdated CMakeLists.txt files in Buildroot packages, if you have a CMake version newer than 3.30 on your host, build with:

$ make -j$(nproc) CMAKE_POLICY_VERSION_MINIMUM=3.5

This will build all of the required components. All build artifacts can be found under output/build.

Running OpenSBI, bm-app and Linux

Start QEMU and launch the bare-metal application and kernel:

./output/images/start-qemu-bm-kernel.sh

By running this script, the test DeviceTree overlay 'hwirq_bind_domain_linux_bmapp.dts' is compiled and applied to the dumped QEMU base DeviceTree before QEMU is re-run.

Below is a segment of the DeviceTree overlay that assigns hart 2 to domains 1 / 2 and boots bm-app1 and bm-app2 in domains 1 / 2 respectively, with a few routing rules: one routes HWIRQ 10 (UART RX) to domain 2, while the others are placeholders for testing purposes:

fragment@0 {
    target-path = "/chosen";
    __overlay__ {
        opensbi-domains {
            compatible = "opensbi,domain,config";
            ...
            /*
             * Domain instance:
             * - possible-harts uses CPU node phandles
             */
            domain1: domain1 {
                compatible = "opensbi,domain,instance";

                /*
                 * QEMU virt -smp 4 dumpdtb phandles:
                 * cpu@1 -> 0x05, cpu@2 -> 0x03
                 */
                possible-harts = <0x05 0x03>;
                boot-hart = <0x03>;
                ...
            };

            domain2: domain2 {
                compatible = "opensbi,domain,instance";

                /*
                 * QEMU virt -smp 4 dumpdtb phandles:
                 * cpu@1 -> 0x05, cpu@2 -> 0x03
                 */
                possible-harts = <0x05 0x03>;
                boot-hart = <0x03>;
                ...
            };

            rpmi_sysirq_intc: interrupt-controller {
                compatible = "opensbi,mpxy-sysirq";
                interrupt-controller;
                #interrupt-cells = <1>;

                /*
                 * VIRQ numbers are allocated from zero, in order.
                 * Each entry: <&irqchip hwirq flags>
                 */
                /*
                 * Base DTB (qemu_linux_test.dtb) phandle:
                 * /soc/interrupt-controller@c000000 -> 0x09
                 */
                interrupts-extended =
                    <0x09 10 4>, /* VIRQ 0: UART RX */
                    <0x09 20 4>, /* VIRQ 1: test */
                    <0x09 21 4>; /* VIRQ 2: test */

                /* MPXY channel id allocated by system designer */
                opensbi,mpxy-channel-id = <4>;

                /* Route VIRQs to domain2 */
                opensbi,domain = <&domain2>;
            };
        };
    };
};

fragment@1 {
    target-path = "/cpus/cpu@1";
    __overlay__ {
        opensbi-domain = <&domain1>;
    };
};

fragment@2 {
    target-path = "/cpus/cpu@2";
    __overlay__ {
        opensbi-domain = <&domain1>;
    };
};

When the following output appears on the console, QEMU is waiting for an incoming connection.

qemu-system-riscv64: -chardev
socket,id=vc0,host=127.0.0.1,port=64321,server=on,wait=on: info: QEMU
waiting for connection on: disconnected:tcp:127.0.0.1:64321,server=on

Connect to QEMU via a new console by using telnet to port 64321:

$ telnet 127.0.0.1 64321

The Linux logs will appear on the new console, while the OpenSBI and bare-metal application logs will appear on the original console.

In the OpenSBI / bare-metal console, the following log:

APLIC: Set target IDC 2 for hwirq 10
APLIC: Set target IDC 2 for hwirq 20
APLIC: Set target IDC 2 for hwirq 21
APLIC: irqchip aplic cold init done
[VIRQ] Init per-domain VIRQ courier state for domain2
[VIRQ] number of harts: 4
[VIRQ] Init per-domain VIRQ courier state for domain1
[VIRQ] number of harts: 4
[VIRQ] set mapping: (hwirq 10, chip_uid 8176) -> VIRQ 0
[VIRQ] add route rule: hwirq 10 route to dom (domain2)
[VIRQ] set mapping: (hwirq 20, chip_uid 8176) -> VIRQ 1
[VIRQ] add route rule: hwirq 20 route to dom (domain2)
[VIRQ] set mapping: (hwirq 21, chip_uid 8176) -> VIRQ 2
[VIRQ] add route rule: hwirq 21 route to dom (domain2)

shows that VIRQ initialization is done for both domains 1 / 2, and that routing rules are detected and populated from the rpmi_sysirq_intc node, while the corresponding mappings are allocated:

  • HWIRQ 10 (UART RX) maps to VIRQ 0, routes to domain 2
  • HWIRQ 20 (for test) maps to VIRQ 1, routes to domain 2
  • HWIRQ 21 (for test) maps to VIRQ 2, routes to domain 2
[ECALL VIRQ] register VIRQ ecall extensions, ret=0
...
Standard SBI Extensions : time,rfnc,ipi,base,hsm,srst,pmu,dbcn,fwft,legacy,dbtr,sse,virq

shows that the VIRQ ECALL extension is registered successfully.

Hart 2 initially boots bm-app1 in domain1 and enables SEIP on hart 2. It is ready for testing (by pressing keyboard keys) once the log continues with:

BM-APP (domain 1, hart 2): Welcome to OpenSBI bare-metal app!
BM-APP (domain 1, hart 2): SBI Spec Version: 3.0
BM-APP (domain 1, hart 2): SBI Implementation: OpenSBI
BM-APP (domain 1, hart 2): OpenSBI Version: 1.8
BM-APP (domain 1, hart 2): Init timer successfully 10000000 ticks/s
BM-APP (domain 1, hart 2): SEIP enabled, stvec=88000b08
BM-APP (domain 1, hart 2): Enable UART RX interrupt
BM-APP (domain 1, hart 2): Setup done. Type keys now to trigger UART interrupts.

By typing any keyboard key (for example, 'a'), you should see the log messages below, which indicate a successful APLIC IRQ handling lifecycle: assert → claim → map → route → enqueue → notify → dequeue → handle → complete.

HWIRQ 10 (UART RX) is mapped, routed, and enqueued to q[domain2,hart2]; after a context switch on hart 2 from domain 1 to domain 2, it is eventually popped and handled by bm-app2, which runs in domain 2, followed by a context switch back to domain 1.

The logs that follow depict a complete APLIC IRQ handling lifecycle, which breaks down into the steps below:

  1. HWIRQ 10 is asserted and the VIRQ courier handler is invoked via the IRQCHIP callback.
[APLIC] IDC_TOPI_ID from CLAIMI (hwirq) 10
[IRQCHIP] Calling handler for hwirq 10
[IRQCHIP] Enter hwirq 10 raw handler
[IRQCHIP] Calling hwirq 10 raw handler callback
[VIRQ] virq courier hart2 curr=domain1 target=domain2 hwirq=10
  2. A VIRQ is mapped, routed and enqueued.
[VIRQ] found existing mapping: (hwirq 10, chip_uid 8196) -> virq 0
[VIRQ] route hwirq 10, chip_uid 8196 -> dom (domain2), channel 4, VIRQ 0
[VIRQ] Get queue for (domain,hartidx): (domain2,2)
[VIRQ] Push VIRQ 0 to queue
  3. M-mode sets the SEIP notification and calls the IRQCHIP EOI.
[IRQCHIP] Set mip.SEIP (mip before=0x20, after=0x220)
[VIRQ] S-mode notified
[IRQCHIP] Calling EOI of hwirq 10
[APLIC] Enter regitered EOI of hwirq 10
  4. Bm-app1 (domain 1) is trapped by SEIP and issues the POP ECALL.
BM-APP (domain 1, hart 2): [VIRQ] SEIP handler trapped
BM-APP (domain 1, hart 2): [VIRQ] Pop IRQ via ecall
  5. M-mode switches context to the target domain (domain 2) in the VIRQ ECALL handler.
[ECALL VIRQ] VIRQ ecall handler, funcid: 0
[VIRQ] virq pop switching hart2 domain1 -> domain2
[domain] switch hart2 domain1 -> domain2 (mideleg=0x1666)
  6. Bm-app2 (domain 2) starts (on its first entry).
[domain] first-entry domain2 on hart2 (mideleg=0x1666)
BM-APP (domain 2, hart 2): Welcome to OpenSBI bare-metal app!
BM-APP (domain 2, hart 2): SBI Spec Version: 3.0
BM-APP (domain 2, hart 2): SBI Implementation: OpenSBI
BM-APP (domain 2, hart 2): OpenSBI Version: 1.8
BM-APP (domain 2, hart 2): Init timer successfully 10000000 ticks/s
  7. Bm-app2 (domain 2) is trapped by SEIP and re-issues the POP ECALL.
BM-APP (domain 2, hart 2): [VIRQ] SEIP handler trapped
BM-APP (domain 2, hart 2): [VIRQ] Pop IRQ via ecall
  8. The M-mode VIRQ ECALL handler pops and returns the next pending VIRQ from q[domain2,hart2].
[ECALL VIRQ] VIRQ ecall handler, funcid: 0
[VIRQ] Get queue for (domain,hartidx): (domain2,2)
[VIRQ] Pop VIRQ 0 from queue
  9. Bm-app2 (domain 2) retrieves the VIRQ, invokes the ISR, and issues the COMPLETE ECALL.
BM-APP (domain 2, hart 2): [VIRQ] Pop IRQ:0
BM-APP (domain 2, hart 2): [VIRQ] Handle IRQ:0, hwirq:10
BM-APP (domain 2, hart 2): [UART] Got 'a'(0x61)
BM-APP (domain 2, hart 2): [VIRQ] Complete IRQ via ecall
  10. The M-mode VIRQ ECALL handler completes and dequeues the VIRQ.
[ECALL VIRQ] VIRQ ecall handler, funcid: 1
[VIRQ] Get queue for (domain,hartidx): (domain2,2)
[VIRQ] Complete VIRQ 0 from queue
  11. Bm-app2 (domain 2) continues to POP / COMPLETE the next pending VIRQ until M-mode reports an empty queue, followed by an SEIP clear and a context switch back to the previous domain (domain 1).
[ECALL VIRQ] VIRQ ecall handler, funcid: 0
[VIRQ] Get queue for (domain,hartidx): (domain2,2)
[VIRQ] VIRQ queue is empty
[VIRQ] return_to_prev after VIRQ complete on hart2
[domain] return hart2 domain2 -> domain1 (mideleg=0x1666)
[domain] switch hart2 domain2 -> domain1 (mideleg=0x1666)
BM-APP (domain 1, hart 2): [VIRQ] No pending IRQ in queue

Linux IRQ Tests

Now, let's run some tests to confirm that the APLIC-DIRECT interrupts used by Linux, which have no explicit routing rules, are not impacted by VIRQ.

Linux runs as the next-stage S-mode payload of the root domain on harts 0 / 1. As a backward-compatible fallback, by default, HWIRQs without explicit routing rules for the root domain are dispatched to the root domain and handled by Linux as before.

In the Linux console, after logging in as root, check the IRQ status with:

$ watch -n 1 cat /proc/interrupts
Every 1.0s: cat /proc/interrupts 2026-02-23 22:49:38

           CPU0       CPU1
  10:      1152       1753  RISC-V INTC   5 Edge      riscv-timer
  12:        16          0  APLIC-DIRECT 33 Level     virtio2
  14:       512          0  APLIC-DIRECT  7 Level     virtio1
  15:       271          0  APLIC-DIRECT  8 Level     virtio0
  16:         0          0  APLIC-DIRECT 11 Level     101000.rtc
IPI0:        62         64  Rescheduling interrupts
IPI1:       236        330  Function call interrupts
IPI2:         0          0  CPU stop interrupts
IPI3:         0          0  CPU stop (for crash dump) interrupts
IPI4:         0          0  IRQ work interrupts
IPI5:         0          0  Timer broadcast interrupts
IPI6:         0          0  CPU backtrace interrupts
IPI7:         0          0  KGDB roundup interrupts

The counters of APLIC-DIRECT should keep incrementing.

Keep testing in the bare-metal application console by pressing different keyboard keys while monitoring the counters in the Linux console; the IRQ status should not be affected.

This shows that VIRQ mapping / routing / couriering applies only to those HWIRQs explicitly bound to a specific domain (UART to domain 2 in our test) via DeviceTree-based routing rules; all others follow the default fallback behaviour and are handled by the root domain.


Appendix – Programming Interface

Init / Uninit API

Public APIs for VIRQ subsystem initialization:

/*
 * Initialize per-domain VIRQ state.
 *
 * @dom:
 *     Domain to initialize.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL on invalid parameters
 *     SBI_ENOMEM on allocation failure
 */
int sbi_virq_domain_init(struct sbi_domain *dom);

/*
 * Free per-domain VIRQ state.
 *
 * @dom:
 *     Domain whose per-domain VIRQ state is freed.
 */
void sbi_virq_domain_exit(struct sbi_domain *dom);

/*
 * Initialize VIRQ subsystem (mapping allocator + route rules).
 * Must be called once before parsing sysirq DT nodes.
 *
 * @init_virq_cap:
 *     Initial VIRQ bitmap capacity in bits
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EALREADY if called more than once
 *     SBI_ENOMEM on allocation failure
 *     Other SBI_E* error codes propagated from mapping init
 */
int sbi_virq_init(u32 init_virq_cap);

/*
 * Query whether the VIRQ subsystem is initialized.
 */
bool sbi_virq_is_inited(void);

Mapping API

Public APIs for HWIRQ / VIRQ mapping and allocation:

/*
 * Initialize a per-channel VIRQ map.
 *
 * @channel_id:
 *     VIRQ space/channel ID (0 is the default channel).
 *
 * @init_virq_cap:
 *     Initial capacity in VIRQ bits (e.g., 256). Implementation may
 *     grow beyond.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_ENOMEM on allocation failure
 */
int sbi_virq_map_init(u32 channel_id, u32 init_virq_cap);

/*
 * Create or get a stable mapping for (channel_id, chip_uid, hwirq) ->
 * VIRQ.
 *
 * @channel_id:
 *     Paravirt channel ID; VIRQ numbering is local to each channel.
 *
 * @chip_uid:
 *     Unique 32-bit ID of the host irqchip device.
 *
 * @hwirq:
 *     Host HWIRQ number as produced by the irqchip driver.
 *
 * @allow_identity:
 *     If true, allocator may attempt VIRQ == hwirq for small ranges.
 *
 * @identity_limit:
 *     Upper bound (exclusive) for identity mapping trial: hwirq <
 *     identity_limit.
 *
 * @out_virq:
 *     Output pointer receiving the mapped/allocated VIRQ (0 is valid).
 *
 * Return:
 *     SBI_OK on success
 *     SBI_ENOMEM on allocation failure
 *     SBI_ENOSPC if allocator cannot allocate
 *     SBI_EINVAL on invalid parameters
 */
int sbi_virq_map_one(u32 channel_id, u32 chip_uid, u32 hwirq,
                     bool allow_identity, u32 identity_limit,
                     u32 *out_virq);

/*
 * Force a mapping for (channel_id, chip_uid, hwirq) -> VIRQ.
 *
 * @channel_id:
 *     Paravirt channel ID; VIRQ numbering is local to each channel.
 *
 * @chip_uid:
 *     Unique 32-bit ID of the host irqchip device.
 *
 * @hwirq:
 *     Host HWIRQ number as produced by the irqchip driver.
 *
 * @virq:
 *     VIRQ number to assign (0 is valid).
 *
 * Return:
 *     SBI_OK on success
 *     SBI_ENOMEM on allocation failure
 *     SBI_EINVAL on invalid parameters
 *     SBI_EALREADY if a different mapping already exists
 */
int sbi_virq_map_set(u32 channel_id, u32 chip_uid, u32 hwirq, u32 virq);

/*
 * Ensure VIRQ map capacity for a given channel.
 *
 * @channel_id:
 *     Paravirt channel ID.
 *
 * @min_virq_cap:
 *     Minimum VIRQ bitmap capacity in bits (will be rounded up).
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL if the map is not initialized (channel 0)
 *     SBI_ENOMEM on allocation failure
 */
int sbi_virq_map_ensure_cap(u32 channel_id, u32 min_virq_cap);

/*
 * Lookup existing mapping: (channel_id, chip_uid, hwirq) -> VIRQ.
 *
 * @channel_id:
 *     Paravirt channel ID; VIRQ numbering is local to each channel.
 *
 * @chip_uid:
 *     Irqchip unique id.
 *
 * @hwirq:
 *     Host hwirq number.
 *
 * @out_virq:
 *     Output VIRQ (0 is valid).
 *
 * Return:
 *     SBI_OK if found
 *     SBI_ENOENT if not mapped
 *     SBI_EINVAL on invalid input
 */
int sbi_virq_hwirq2virq(u32 channel_id, u32 chip_uid, u32 hwirq,
                        u32 *out_virq);

/*
 * Reverse lookup: (channel_id, VIRQ) -> (chip_uid, hwirq).
 *
 * @channel_id:
 *     Paravirt channel ID; VIRQ numbering is local to each channel.
 *
 * @virq:
 *     VIRQ number to look up.
 *
 * @out_chip_uid:
 *     Output pointer receiving irqchip unique id.
 *
 * @out_hwirq:
 *     Output pointer receiving host hwirq number.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL if virq is VIRQ_INVALID, out of range, not allocated,
 *     or reverse entry missing
 */
int sbi_virq_virq2hwirq(u32 channel_id, u32 virq,
                        u32 *out_chip_uid, u32 *out_hwirq);

/*
 * Unmap a single VIRQ mapping and free the VIRQ number.
 *
 * @virq:
 *     VIRQ number to unmap.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL if virq is invalid or state is inconsistent
 */
int sbi_virq_unmap_one(u32 virq);

/*
 * Uninitialize the VIRQ mapping allocator and free all resources.
 *
 * Notes:
 * - This frees bitmap, forward vector, and reverse chunks.
 */
void sbi_virq_map_uninit(void);

Routing API

Public APIs for HWIRQ / Domain routing rules:

/*
 * Reset all HWIRQ->Domain routing rules (frees the rule array).
 *
 * Typical usage:
 * - Called once at cold boot during init before parsing DT domains.
 */
void sbi_virq_route_reset(void);

/*
 * Add a routing rule: [first .. first+count-1] -> dom.
 *
 * @dom:
 *     Target domain that should receive HWIRQs in this range.
 *
 * @first:
 *     First HWIRQ number (inclusive).
 *
 * @count:
 *     Number of HWIRQs in the range.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL on invalid parameters
 *     SBI_ENOMEM on allocation failure
 *     SBI_EALREADY if the new range overlaps an existing rule
 */
int sbi_virq_route_add_range(struct sbi_domain *dom, u32 first,
                             u32 count);

/*
 * Lookup destination domain for a given HWIRQ.
 *
 * @hwirq:
 *     Incoming host HWIRQ number.
 *
 * Return:
 *     Pointer to destination domain. If no rule matches, returns
 *     &root.
 */
struct sbi_domain *sbi_virq_route_lookup_domain(u32 hwirq);

Courier API

Public APIs for per-(domain,hart) pending queue couriering:

/*
 * Enqueue a VIRQ for the destination domain on the current hart.
 *
 * @c:
 *     Courier binding containing:
 *     - c->dom  : destination domain
 *     - c->chip : irqchip device pointer
 *     - c->virq : VIRQ number
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL on invalid parameters
 *     SBI_ENODEV if per-(domain,hart) state is not available
 *     SBI_ENOMEM if queue is full
 */
int sbi_virq_enqueue(struct sbi_virq_courier_binding *c);

/*
 * Pop the next pending VIRQ for the current domain on the current hart.
 *
 * Return:
 *     VIRQ_INVALID if none pending or state not available
 *     otherwise a VIRQ number (zero is legal)
 */
u32 sbi_virq_pop_thishart(void);

/*
 * Complete a previously couriered VIRQ for the current domain/hart.
 *
 * @virq:
 *     VIRQ to complete.
 */
void sbi_virq_complete_thishart(u32 virq);

/* Return to previous domain if a VIRQ-driven switch is pending. */
void sbi_virq_return_to_prev_if_needed(void);

/*
 * Courier handler intended to be registered by the host irqchip driver.
 *
 * @hwirq:
 *     Incoming host HWIRQ number asserted on the irqchip.
 *
 * @opaque:
 *     Pointer to a valid struct sbi_virq_courier_ctx, which provides
 *     the irqchip device pointer used for mapping and mask/unmask.
 *
 * Return:
 *     SBI_OK on success
 *     SBI_EINVAL on invalid parameters
 *     Other SBI_E* propagated from mapping or enqueue
 */
int sbi_virq_courier_handler(u32 hwirq, void *opaque);