\chapter{Design}
\label{chap:design}

% This is the central chapter of the thesis. It presents the goal as well as
% one's own ideas, assessments and design decisions. It can be worthwhile to
% play through several alternatives and then explicitly justify why a
% particular one was chosen. This chapter should be sketched - at least in
% keywords - as soon as the first decisions on a design are made.
% In a normally progressing thesis, however, it will change continuously.
% The chapter must not become too detailed, otherwise the reader gets bored.
% It is very important to find the right level of abstraction. When writing,
% one should keep the reusability of the text in mind.

% If a publication is planned based on the thesis, parts of this chapter can
% be reused for it. The chapter will usually have at least 8 pages; more
% than 20 can be an indication that the right level of abstraction was
% missed.

In this chapter, we design a class interface for use as a general purpose cache. We present the challenges encountered and then detail the solutions employed to address them, refining the architecture with each step. Details on the implementation of this blueprint are omitted here, as we discuss a selection of relevant aspects in Chapter \ref{chap:implementation}. We also briefly touch on the subject of \gls{dsa} usage. \par

\section{Cache Design} \label{sec:design:cache}
The task of prefetching is closely aligned with that of a cache. As a cache is more generic and allows for uses beyond \gls{qdp}, the decision was made to address prefetching in \gls{qdp} by implementing an offloading \texttt{Cache}. Henceforth, when referring to the provided implementation, we will use \texttt{Cache}. \par

The interface of \texttt{Cache} must provide three basic functions: (1) requesting a memory block to be cached, (2) accessing a cached memory block and (3) synchronizing the cache with the source memory. The latter operation comes into play when the cached data may also be modified, necessitating an update either of the cache from the source or vice versa. Due to the various setups and use cases for this cache, the user should also be responsible for choosing the cache placement and the copy method. As re-caching is resource intensive, data should remain in the cache for as long as possible; we only flush entries when a lack of free cache memory requires it. \par

\begin{figure}[h!tb]
\centering
\includegraphics[width=0.9\textwidth]{images/uml-cache-and-cachedata.pdf}
\caption{Public interface of the \texttt{CacheData} and \texttt{Cache} classes. Colour coding denotes thread safety: grey marks functions that are not safe for threaded access, green indicates full safety guarantees relying only on atomics, yellow may use locking internally but is still safe for use, and red must be called from a single-threaded context.}
\label{fig:impl-design-interface}
\end{figure}

\subsection{Interface}
To facilitate rapid integration and alleviate developer workload, we opted for a simple interface. Given that this work primarily focuses on caching static data, we only provide cache invalidation and not synchronization. The \texttt{Cache::Invalidate} function, given a memory address, will remove all entries for it from the cache. The other two operations, caching and access, are provided in a single function, which we shall henceforth call \texttt{Cache::Access}. This function receives a data pointer and size as parameters and takes care of either submitting a caching operation, if the received pointer is not yet cached, or returning the cache entry if it is. The user retains control over cache placement and the assignment of tasks to accelerators through the mechanisms outlined in Section \ref{sec:design:accel-usage}. This interface is represented in the right-hand block of Figure \ref{fig:impl-design-interface}, labelled \enquote{Cache}, and includes some additional operations beyond the basic requirements. \par

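The following sketch outlines one possible C++ shape for this interface. The types and signatures shown are illustrative assumptions, not the actual interface, which is documented in Figure \ref{fig:impl-design-interface} and discussed in Chapter \ref{chap:implementation}. \par

\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <memory>

// Illustrative sketch only; names and signatures are assumptions.
class CacheData {
public:
    void WaitOnCompletion();     // blocks until the copy is done
    uint8_t* GetDataLocation();  // pointer to the cached block
};

class Cache {
public:
    // Returns a handle for [data, data + size); submits a copy
    // task to the accelerator if the region is not yet cached.
    std::unique_ptr<CacheData> Access(uint8_t* data, size_t size);

    // Removes all entries whose source pointer equals data.
    void Invalidate(uint8_t* data);
};
\end{verbatim}
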
Given the asynchronous nature of caching operations, users may opt to await their completion. This proves particularly beneficial when parallel threads are actively processing, and the current thread strategically pauses until its data becomes available in faster memory, thereby optimizing access speeds for local computations. \par
To facilitate this process, the \texttt{Cache::Access} method returns an instance of an object referred to as \texttt{CacheData}. Figure \ref{fig:impl-design-interface} documents the public interface of \texttt{CacheData} in the left block labelled as such. Invoking \texttt{CacheData::GetDataLocation} provides a pointer to the location of the cached data. Additionally, the \texttt{CacheData::WaitOnCompletion} method is available, designed to return only upon completion of the caching operation. During this period, the current thread sleeps, allowing unimpeded progress for other threads. To ensure that only pointers to valid memory regions are returned, \texttt{CacheData::WaitOnCompletion} must be called before accessing the data: it queries the completion state of the operation and, on success, updates the cache pointer to the then available memory region. \par

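A hedged usage sketch, assuming the hypothetical signatures from the previous listing, illustrates the intended call sequence. \par

\begin{verbatim}
// Hypothetical usage of the sketched interface above.
void Process(Cache& cache, uint8_t* data, size_t size) {
    std::unique_ptr<CacheData> entry = cache.Access(data, size);

    // ... other threads may proceed with useful work here ...

    entry->WaitOnCompletion();  // sleep until the copy completes
    uint8_t* cached = entry->GetDataLocation();  // now valid
    // ... operate on cached instead of data ...
}
\end{verbatim}
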
\subsection{Cache Entry Reuse} \label{subsec:design:cache-entry-reuse}
When multiple consumers wish to access the same memory block through the \texttt{Cache}, we face a choice between providing each with their own entry or sharing one for all consumers. The first option may lead to high load on the accelerator due to multiple copy operations being submitted and also increases the memory footprint of the cache. The latter option, although more complex, was chosen to address these concerns. To implement this, the existing \texttt{CacheData} will be extended in scope to handle multiple consumers. Copies of it can be created, and they must synchronize with each other for \texttt{CacheData::WaitOnCompletion} and \texttt{CacheData::GetDataLocation}. This is illustrated by the green markings, indicating thread safety guarantees for access, in Figure \ref{fig:impl-design-interface}. \par
\subsection{Cache Entry Lifetime} \label{subsec:design:cache-entry-lifetime}
Allowing multiple references to the same entry introduces concerns regarding memory management. The allocated block should only be freed once all copies of a \texttt{CacheData} instance are destroyed, thereby tying the cache entry's lifetime to the longest-living copy of the original instance. This ensures that access to the entry is legal during the lifetime of any \texttt{CacheData} instance, with deallocation occurring only when the last copy is destroyed. \par

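These semantics match reference-counted shared ownership. A minimal sketch, assuming that \texttt{CacheData} stores its allocation in a \texttt{std::shared\_ptr}, follows; the actual mechanism is discussed in Chapter \ref{chap:implementation}. \par

\begin{verbatim}
#include <cstdint>
#include <memory>

// Sketch: the cached block lives as long as any CacheData copy.
class CacheData {
    // Reference-counted allocation; the memory is freed when the
    // last copy of this CacheData instance is destroyed.
    std::shared_ptr<uint8_t[]> cache_;
public:
    CacheData(const CacheData& other) = default; // copies share
    // ... WaitOnCompletion and GetDataLocation as above ...
};
\end{verbatim}
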
\subsection{Usage Restrictions}
The cache, in the context of this work, primarily handles static data. Therefore, two restrictions are placed on the invalidation operation. This decision results in a drastically simpler cache design, as implementing a fully coherent cache would require developing a thread-safe coherence scheme, which is beyond the scope of our work. \par
Firstly, overlapping regions in the cache will result in undefined behaviour during the invalidation of any one of them. Only the entries with an equivalent source pointer will be invalidated, while other entries with differing source pointers, which, due to their size, still cover the now-invalidated region, will remain unaffected. At this point, the cache may or may not continue to contain invalid elements. \par

Secondly, invalidation is a manual process, requiring the programmer to remember which pieces of data are currently cached and to invalidate them upon modification. No ordering guarantees are provided in this situation, potentially leaving threads holding pointers to now-outdated entries and continuing their progress with this data. \par

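The hedged example below, reusing the hypothetical interface from the earlier sketches, shows the manual discipline this places on the programmer; \texttt{ModifySource} stands in for any operation writing to the source region. \par

\begin{verbatim}
// Hypothetical example of the manual invalidation required.
void Update(Cache& cache, uint8_t* data, size_t size) {
    std::unique_ptr<CacheData> entry = cache.Access(data, size);
    // ... read data through the cache entry ...

    ModifySource(data, size); // source modified -> cache stale
    cache.Invalidate(data);   // the programmer must remember this;
                              // threads still holding old CacheData
                              // instances may keep using stale data
}
\end{verbatim}
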
Due to its reliance on \texttt{libnuma} for memory allocation, \texttt{Cache} is exclusively compatible with systems where this library is available. It is important to note that Windows platforms use their own API for this purpose, which is incompatible with \texttt{libnuma}, rendering the code non-executable on such systems \cite{microsoft:numa-malloc}. \par

\section{Accelerator Usage}
\label{sec:design:accel-usage}
Compared with the challenges of ensuring correct entry lifetime and thread safety, the application of \gls{dsa} for the task of duplicating data is relatively straightforward, thanks in part to \gls{intel:dml} \cite{intel:dmldoc}. Upon a call to \texttt{Cache::Access} that determines the given memory pointer not to be present in the cache, work is submitted to the accelerator. Before proceeding, however, the desired location for the cache entry must be determined, which is handled by the user-defined cache placement policy function. Once the desired placement is obtained, the copy policy then determines which nodes should participate in the copy operation. Following Section \ref{subsection:dsa-hwarch}, this is equivalent to selecting the accelerators. The copy tasks are then distributed across the participating nodes. As the choice of cache placement and copy policy is user-defined, one possibility will be discussed in Chapter \ref{chap:implementation}. \par

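The two policies could, for instance, be supplied to \texttt{Cache} as callables. The following sketch shows assumed signatures for illustration only; a concrete choice is discussed in Chapter \ref{chap:implementation}. \par

\begin{verbatim}
#include <cstddef>
#include <functional>
#include <vector>

// Assumed signatures for illustration, not the actual ones.
// Chooses the NUMA node on which to place the cache entry, given
// the node accessing the data, the node holding it, and its size.
using CachePlacementPolicy =
    std::function<int(int access_node, int data_node, size_t size)>;

// Selects the nodes, and thereby the accelerators, that
// participate in the copy operation to the chosen destination.
using CopyPolicy =
    std::function<std::vector<int>(int dst_node, int src_node)>;
\end{verbatim}
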
%%% Local Variables:
%%% TeX-master: "diplom"
%%% End: