
write section about dsa hw/sw architecture

master
Constantin Fürst 12 months ago
parent
commit
1f055d84ed

BIN
thesis/bachelor.pdf

50
thesis/content/20_state.tex

@@ -37,16 +37,47 @@ Introduced with the 4th generation of Intel Xeon Scalable Processors \cite{intel
\section{Architecture}
To utilize the hardware optimally, knowledge of its workings is required in order to make educated decisions. This section therefore describes both the workings of the \gls{dsa} engine itself and the view that is presented through its software interfaces. All statements are based on Chapter 3 of the Architecture Specification by Intel \cite{intel:dsaspec}. \par
As the accelerator is directly integrated into the CPU, a system with multiple processors, as is common in servers, will also contain multiple \gls{dsa}s. These engines are accessible as PCIe devices via the CPU's I/O fabric and submit memory requests through this bus directly to the \gls{iommu}. Low-level configuration of the device is done through memory-mapped I/O registers located in the \gls{bar}, which is also used to set the location of the work submission portals. Through these portals, the so-called work descriptors are handed over to the device for processing. \par
\subsection{Hardware Architecture}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{images/dsa-internal-block-diagram.png}
\caption[\gls{dsa} Internal Block Diagram]{Taken from Figure 1a of \cite{intel:analysis}}
\label{fig:dsa-internal-block}
\end{figure}
The accelerator is directly integrated into the processor and attaches via the I/O fabric interface, over which all communication is conducted. Over this interface, it is accessible as a PCIe device. Configuration is therefore done through memory-mapped registers set in the device's \gls{bar}. Through these, the device's layout is defined and memory pages for work submission are set. In a system with multiple processing nodes, there may also be one \gls{dsa} per node.\par
As already mentioned, the layout of the \gls{dsa} may be software-defined to satisfy different use cases. The structure is made up of three components, namely \gls{dsa:wq}s, \gls{dsa:engine}s and \gls{dsa:group}s. \gls{dsa:wq}s provide the means to submit tasks to the device and will be described in more detail shortly. An \gls{dsa:engine} is the processing block that connects to memory and performs the described task. Using \gls{dsa:group}s, \gls{dsa:engine}s and \gls{dsa:wq}s are tied together. Depending on the configuration, tasks from one \gls{dsa:wq} may therefore be processed by multiple \gls{dsa:engine}s and, vice versa, one \gls{dsa:engine} may serve multiple \gls{dsa:wq}s. This flexibility is achieved through the Group Arbiter, which connects the two components and acts according to the setup. \par
A \gls{dsa:wq} is accessible through so-called portals, which are mapped memory regions. Work is submitted by writing a descriptor to one of these portals. A descriptor is 64 bytes in size and may contain either one specific task (task descriptor) or the location of a task array in memory (batch descriptor). Through these portals, the submitted descriptor reaches a queue, of which there are two types with different submission methods and use cases. The \gls{dsa:swq} is intended to provide synchronized access to multiple processes, and each group may only have one attached. A \gls{pcie-dmr}, which guarantees implicit synchronization, is generated via \gls{x86:enqcmd} and communicates with the device before writing. This results in a higher submission cost compared to the \gls{dsa:dwq}, to which a descriptor is submitted via \gls{x86:movdir64b}. The \gls{dsa:dwq} is therefore more performant, but may require access control mechanisms and may only be accessed by one process at a time. \par
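To illustrate the submission mechanism, the following sketch fills a single memory-move descriptor and hands it to a \gls{dsa:dwq} portal. It assumes the \texttt{struct dsa\_hw\_desc} definition and operation flags from the Linux UAPI header \texttt{linux/idxd.h}, as well as a portal that has already been mapped into the process (described in the Software View below); it is meant as an illustration rather than a complete implementation. \par
\begin{verbatim}
// Minimal sketch: fill a memory-move descriptor and submit it to a
// Dedicated Work Queue portal. Field names and constants are assumed to
// follow the Linux UAPI header <linux/idxd.h>.
// Compile with -mmovdir64b (and -menqcmd for the SWQ variant).
#include <immintrin.h>
#include <linux/idxd.h>
#include <cstdint>
#include <cstring>

void submit_memmove(void *portal, void *dst, const void *src, uint32_t size,
                    volatile dsa_completion_record *comp /* 32-byte aligned */) {
    dsa_hw_desc desc;
    std::memset(&desc, 0, sizeof(desc));
    desc.opcode          = DSA_OPCODE_MEMMOVE;
    desc.flags           = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV; // request completion record
    desc.src_addr        = reinterpret_cast<uintptr_t>(src);     // virtual addresses, translated
    desc.dst_addr        = reinterpret_cast<uintptr_t>(dst);     // by the IOMMU/ATC
    desc.xfer_size       = size;
    desc.completion_addr = reinterpret_cast<uintptr_t>(comp);

    // DWQ: MOVDIR64B writes the 64-byte descriptor to the portal in one shot.
    _movdir64b(portal, &desc);

    // SWQ variant: ENQCMD reports whether the device accepted the descriptor.
    //   while (_enqcmd(portal, &desc) != 0) { /* queue full, retry */ }
}
\end{verbatim}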
To handle the different descriptors, each \gls{dsa:engine} has two internal execution paths, one for task and one for batch descriptors. Processing a task descriptor is straightforward, as all information required to complete the operation is contained within it. For a batch, the \gls{dsa} first reads the batch descriptor, then fetches all task descriptors of the batch from memory and processes them. If configured to do so, an \gls{dsa:engine} can also trigger a page fault when trying to access an unloaded page and wait for its resolution; otherwise, an error is generated in this scenario. \par
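Building on the previous sketch, the batch path could be used as follows: the task descriptors are placed contiguously in memory and only a single batch descriptor referencing them is submitted through the portal. Field names again follow the assumed \texttt{linux/idxd.h} definitions. \par
\begin{verbatim}
// Sketch of a batch submission: the engine first reads the batch
// descriptor, then fetches and processes the referenced task descriptors.
// Assumes the <linux/idxd.h> definitions used above.
#include <immintrin.h>
#include <linux/idxd.h>
#include <cstdint>
#include <cstring>

void submit_batch(void *portal, const dsa_hw_desc *tasks, uint32_t task_count,
                  volatile dsa_completion_record *comp) {
    dsa_hw_desc batch;
    std::memset(&batch, 0, sizeof(batch));
    batch.opcode          = DSA_OPCODE_BATCH;
    batch.flags           = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
    batch.desc_list_addr  = reinterpret_cast<uintptr_t>(tasks); // task descriptor array
    batch.desc_count      = task_count;                         // number of tasks in the array
    batch.completion_addr = reinterpret_cast<uintptr_t>(comp);

    _movdir64b(portal, &batch);
}
\end{verbatim}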
Ordering of operations is only guaranteed for a configuration with one \gls{dsa:wq} and one \gls{dsa:engine} in a \gls{dsa:group}, and only when submitting exclusively batch or exclusively task descriptors, but not a mixture. Even then, only write-ordering is guaranteed, meaning that \enquote{reads by a subsequent descriptor can pass writes from a previous descriptor} \cite[30]{intel:dsaspec}. A different issue arises should an operation fail: the \gls{dsa} will continue to process the following descriptors. Care must therefore be taken in read-after-write scenarios, either by waiting for successful completion before submitting the dependent descriptor, by inserting a drain descriptor for tasks, or by setting the fence flag for a batch. The latter two methods tell the processing engine that all writes must be committed and, in the case of the fence in a batch, to abort on a previous error. \par
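In code, the two ordering mechanisms described above could be sketched as follows, again assuming the opcode and flag names from \texttt{linux/idxd.h}. \par
\begin{verbatim}
// Sketch of the ordering mechanisms described above.
#include <immintrin.h>
#include <linux/idxd.h>
#include <cstdint>
#include <cstring>

// Non-batch case: make preceding CPU stores globally visible, then submit
// a drain descriptor, which completes only after all prior descriptors.
void submit_drain(void *portal, volatile dsa_completion_record *comp) {
    dsa_hw_desc desc;
    std::memset(&desc, 0, sizeof(desc));
    desc.opcode          = DSA_OPCODE_DRAIN;
    desc.flags           = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
    desc.completion_addr = reinterpret_cast<uintptr_t>(comp);
    _mm_sfence();               // recommended before pushing a drain descriptor
    _movdir64b(portal, &desc);
}

// Batch case: the fence flag on the dependent descriptor forces all
// previous writes of the batch to be committed first and aborts the
// remainder of the batch if one of them failed.
void mark_dependent(dsa_hw_desc *dependent) {
    dependent->flags |= IDXD_OP_FLAG_FENCE;
}
\end{verbatim}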
An important aspect of modern computer systems is the separation of address spaces through virtual memory. The \gls{dsa} must therefore handle address translation, as a process submitting a task does not know the physical location of its data, and the descriptor consequently contains virtual addresses. To translate them, the \gls{dsa:engine} communicates with the \gls{iommu} and \gls{atc}. This requires knowledge about the submitting process, which is why each task descriptor has a field for the \gls{x86:pasid} that is filled by the \gls{x86:enqcmd} instruction for a \gls{dsa:swq} or set statically after a process is attached to a \gls{dsa:dwq}. \par
The completion of a descriptor may be signaled through a completion record and an interrupt, if configured to do so. For this, the \gls{dsa} \enquote{provides two types of interrupt message storage: (1) an MSI-X table, enumerated through the MSI-X capability; and (2) a device-specific Interrupt Message Storage (IMS) table} \cite[27]{intel:dsaspec}. \par
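When no interrupt is requested, software typically polls the completion record instead. A minimal polling loop, assuming the \texttt{struct dsa\_completion\_record} layout from \texttt{linux/idxd.h}, could look like this: \par
\begin{verbatim}
// Busy-wait on the completion record written by the DSA. The status byte
// stays zero until the device reports a result; DSA_COMP_SUCCESS indicates
// successful completion (names assumed from <linux/idxd.h>).
#include <immintrin.h>
#include <linux/idxd.h>

bool wait_for_completion(volatile dsa_completion_record *comp) {
    while (comp->status == 0) {
        _mm_pause(); // reduce pressure on the polled cache line
    }
    return comp->status == DSA_COMP_SUCCESS;
}
\end{verbatim}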
\subsection{Software View}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{images/dsa-software-architecture.png}
\caption[\gls{dsa} Software Architecture]{Taken from Figure 1b of \cite{intel:analysis}}
\label{fig:dsa-software-arch}
\end{figure}
Thanks to the efforts of Intel programmers, a driver for the \gls{dsa} \cite{intel:idxd-driver-repo} exists since Linux Kernel 5.10 \cite[Installation Instructions]{intel:dmldoc}. It has no counterpart in the Windows OS family \cite[Installation Instructions]{intel:dmldoc}, meaning that code developed without an alternative execution path will not work there. To interface with the driver and perform configuration operations, Intel's accel-config \cite{intel:libaccel-config-repo} user-space toolset may be used; it provides a command-line interface and can read configuration files to set up the device as described previously. After successful configuration, each \gls{dsa:wq} is exposed as a character device through which the associated portal can be mapped into the process via \texttt{mmap} \cite[3]{intel:analysis}. \par
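As an illustration of this interface, the following fragment opens such a character device and maps the associated portal into the process. The device path shown is only an example of the naming used by the idxd driver and depends on the actual configuration. \par
\begin{verbatim}
// Map the submission portal of a previously configured work queue.
// "/dev/dsa/wq0.0" is an example path (device 0, work queue 0) created by
// the idxd driver; the actual name depends on the accel-config setup.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

void *map_wq_portal(const char *path = "/dev/dsa/wq0.0") {
    int fd = open(path, O_RDWR);
    if (fd < 0) return nullptr;

    // The portal is a 4 KiB write-only region; descriptors are written to
    // it via MOVDIR64B or ENQCMD as described in the previous section.
    void *portal = mmap(nullptr, 0x1000, PROT_WRITE,
                        MAP_SHARED | MAP_POPULATE, fd, 0);
    close(fd); // the mapping remains valid after closing the descriptor
    return portal == MAP_FAILED ? nullptr : portal;
}
\end{verbatim}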
Given the necessary file permissions, it would now be possible for a process to submit work to the \gls{dsa} via the \gls{x86:movdir64b} or \gls{x86:enqcmd} instructions, configuring the descriptors manually. This, however, is quite cumbersome, which is why Intel's Data Mover Library \cite{intel:dmldoc} exists. With some limitations, such as the lack of support for \gls{dsa:dwq}s, this library presents a high-level interface that takes care of the creation and submission of descriptors as well as some error handling and reporting. Thanks to this high-level view, the code may choose an execution path at runtime, allowing memory operations to be executed either in hardware (on a \gls{dsa}) or in software (using equivalent instructions provided by the library). Code based upon it is therefore automatically compatible with systems that do not provide the hardware or driver support. \par
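The resulting usage can be sketched as follows; this assumes the high-level C++ interface of the library as documented in \cite{intel:dmldoc} and is only meant to show its general shape. \par
\begin{verbatim}
// Copy one buffer into another using Intel's Data Mover Library. The
// execution path template parameter selects where the operation runs:
// dml::software (CPU), dml::hardware (DSA) or dml::automatic (runtime choice).
#include <dml/dml.hpp>
#include <cstdint>
#include <vector>

int main() {
    std::vector<std::uint8_t> src(1024, 0xAB); // example source buffer
    std::vector<std::uint8_t> dst(1024, 0x00); // destination of equal size

    auto result = dml::execute<dml::automatic>(
        dml::mem_move, dml::make_view(src), dml::make_view(dst));

    return result.status == dml::status_code::ok ? 0 : 1;
}
\end{verbatim}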
\begin{itemize}
\item possibly more performance with multiple engines per group (and a single WQ) to hide high-latency address translation \cite[25]{intel:dsaspec}
\item a drain descriptor / drain command signals completion of preceding descriptors for fencing in non-batch submissions; in batches the ``fence flag'' can be used to ensure ordering, and failures before a fence will lead to the following descriptors being aborted \cite[30]{intel:dsaspec}; \texttt{sfence} or \texttt{mfence} should be executed before pushing a drain descriptor \cite[32]{intel:dsaspec}
\item the cache control flag in a descriptor controls whether writes are directed to cache or to memory \cite[31]{intel:dsaspec}; effects on copies from DRAM to HBM unknown
\item shared WQs receive work via a ``PCIe deferrable memory write request'' to the portal, which removes the need for synchronization of submissions but can cost more due to the communication overhead of posting a write request and waiting for it to be signalled as completed \cite[23]{intel:dsaspec}
\item dedicated WQs are configured by the driver with a specified PASID for address translation and can not be shared by multiple clients \cite[24]{intel:dsaspec}
\end{itemize}
\section{HW/SW Setup}
@@ -59,17 +90,10 @@ Setup Requirements:
\item VT-d enabled
\item limit CPUPA to 46 Bits disabled
\item IOMMU enabled
\item kernel with IOMMU and idxd driver support
\item kernel option "intel\_iommu=on,sm\_on"
\end{itemize}
Software Configuration:
Describe Intel's accel-config and how it works, with a back reference to the architecture.
Software Access:
Explain how a piece of software may access the \gls{dsa}/WQ, how the drivers and \gls{dsa} libraries enable this,
and also how access policies are enforced.
\section{Microbenchmarks}
\todo{provide microbenchmarks with multiple configurations and for many use cases}
