If you ask what an abstraction is, you might get the answer: giving names to things. This definition is okay, but not as accurate as I would like. As a counter-argument, this article describes a method of abstraction which explicitly does not give names to things.
The name-giving method is the dominant one. In Java you use classes. In Haskell you use type classes. In Rust you use traits, which are much like Haskell's type classes. In C++ you might get concepts, but you already have classes as well. All of these attach a name to the abstraction.
Andrei Alexandrescu proposes Design by Introspection, which inspects types instead of (statically) dispatching by name. He invented it while building allocators for the D standard library. As a C++ programmer, you probably know where to use allocators. The rest of the world probably does not, because most languages give you no choice. Here is what C++ programmers can do:
```cpp
std::vector<int> v1; // an integer array

// to allocate memory, use something other than malloc
std::vector<int, tbb::scalable_allocator<int>> v2;
```
The first declaration uses the default allocator, while the second uses tbb::scalable_allocator instead. This might improve performance.
How many different allocators are there? A lot, because there are a lot of possible design choices: What alignment to use? Can you choose alignment per allocation? Is memory contiguous or not? Is it thread-local or shared? Can you free? Must you free or is there a garbage collector? Can you reallocate? Can you change the size of objects? Can you find the start address of arbitrary pointers?
With all these decisions, the number of possible combinations is huge. Coming up with a name for each combination is neither feasible nor helpful. Instead, just look at the parameters and adapt.
Let us have a closer look at std.experimental.allocator, D's option for customizing memory allocation. One example from the documentation is an application which allocates a lot of memory chunks of 8 bytes. Assume the default allocator is not particularly fast at this; we could use a free-list allocator to improve speed.
```d
import std.experimental.allocator : allocatorObject, theAllocator;
import std.experimental.allocator.building_blocks.free_list : FreeList;

theAllocator = allocatorObject(FreeList!8());
```
This changes the default allocator theAllocator for the current thread. The allocatorObject wrapper is necessary to turn the compile-time properties derived from the type FreeList!8 into something we can query dynamically. For now, we still give a name to a thing ("FreeList"), but the building_blocks package already hints at the interesting part.
Here is another example from the documentation, using a trick from jemalloc: use different free-list allocators for different allocation sizes.
```d
alias FList = FreeList!(GCAllocator, 0, unbounded);
alias A = Segregator!(
    8, FreeList!(GCAllocator, 0, 8),
    128, Bucketizer!(FList, 9, 128, 16),
    256, Bucketizer!(FList, 129, 256, 32),
    512, Bucketizer!(FList, 257, 512, 64),
    1024, Bucketizer!(FList, 513, 1024, 128),
    2048, Bucketizer!(FList, 1025, 2048, 256),
    3584, Bucketizer!(FList, 2049, 3584, 512),
    4072 * 1024, AllocatorList!(
        () => BitmappedBlock!(GCAllocator, 4096)(4072 * 1024)),
    GCAllocator
);

// example use:
A tuMalloc;
auto b = tuMalloc.allocate(500);
```
In the first line, we define a free-list allocator FList, which gets its memory from the garbage collector and works for any allocation size.
Then we construct the actual allocator A. At the top sits Segregator, which dispatches on the allocation size. For sizes of 0 to 8 bytes, we again use a free-list allocator. For sizes of 9 to 128 bytes, we use Bucketizer, which also dispatches on the allocation size, but a little differently: the parameters 9, 128, and 16 are the minimum size, the maximum size, and the step size. The first bucket serves allocation sizes from 9 to 24 bytes, the second from 25 to 40 bytes, and so on up to 128 bytes. The FList parameter is the allocator to use within each bucket. This scheme continues with increasing step sizes up to an allocation size of 3584 bytes.
For allocation sizes above 3584 bytes and up to roughly 4 MB (4072 * 1024 bytes), Segregator uses an AllocatorList, which lazily creates BitmappedBlock allocators on demand. BitmappedBlock uses a bit map to track allocations in 4 KB blocks, which is efficient and compact. We avoid its weakness (many small allocations and thus fragmentation) because smaller sizes are served by the other allocators.
Finally, if you really allocate a chunk of memory larger than that, we fall back to the garbage collector.
What is the actual trick of Design by Introspection here? We use GCAllocator, the garbage collector, as the source of memory. Since this allocator provides a method to free memory, our composed allocator A can provide one as well. If we switched to an allocator without one, A would not have a free method either. Andrei achieves this with D's static if. Here is the code from Bucketizer:
```d
static if (hasMember!(Allocator, "deallocate"))
bool deallocate(void[] b)
{
    if (!b.ptr) return true;
    if (auto a = allocatorFor(b.length))
    {
        a.deallocate(b.ptr[0 .. goodAllocSize(b.length)]);
    }
    return true;
}
```
The deallocate method is only declared if the parent allocator Allocator has a member called deallocate. It picks the appropriate bucket with its allocator and calls that allocator's deallocate method.
Since memory allocation is performance-critical code, we cannot afford dynamic checks at runtime, especially since the use case for this library is customized allocators. In theory, C++ can provide similar functionality, but in practice it is probably too ugly.
This technique, Design by Introspection, turns out to be useful in surprisingly many cases. For example, D promotes the concept of ranges and has convenience functions in its standard library to deal with them, like drop to remove a certain number of elements from the front or retro to reverse a range. They all introspect the range parameter type and choose different implementations. For a random-access range like an array, a.drop(5) can return the slice a[5 .. $], which is cheap. In contrast, for an input range like a linked list, a.drop(5) must follow the references in the list five times instead.
Another popular use case for these techniques is serialization, where D needs fewer annotations than other languages. There are probably more fascinating applications of the Design by Introspection technique.
The actual topic here is names and abstractions. This article presented a technique which is not about names at all, but still helps to abstract. It explicitly avoids names, because naming every combination would require an overwhelming number of names. Hopefully, this nails my point: abstraction is NOT about giving names to things.
---
I used parts of this article for my TopConf talk "Abstractions: From C to D".