There are lots of compiler optimizations, but which ones are designed specifically for object-oriented languages? Since "object-oriented" is not well defined, let us assume we are talking about Java.
How does Java look different from C once you are in a compiler's intermediate representation? There are far fewer differences than one might think. For example, classes, interfaces, vtables, and reflection can be modelled with structs. Even the more advanced features like closures, generics, and inner and anonymous classes are mere structs. This means all those features are handled before the optimization phase and need no consideration here.
Once you realize that object-oriented constructs are mostly syntactic sugar, handled in the frontend before the optimizations are applied, you start to wonder whether there even are object-oriented optimizations. It turns out that, above all, OO programs have a different style:
- More indirections due to vtables, a.k.a. dynamic dispatch. While C programmers can use function pointers as well, they rarely do, since it is cumbersome. In Java, nearly every call requires a pointer dereference.
- More functions with less code. OO programmers are advised to write small methods.
- More function/method calls overall, since dynamic dispatch is preferred over switch statements. It is considered good style and helps to keep the code modular.
- More memory allocations, since objects are always passed by reference.
This leads to two basic optimizations, which are probably not worth the effort for C programs, but can provide tremendous performance boosts for Java.
Heap-to-Stack Allocation
Building on escape analysis, objects that provably do not outlive their method can be allocated on the stack instead of the heap. Handling stack frames is more efficient than managing heap memory; for example, stack objects are freed together with the stack frame, effectively in a single instruction. And since the stack is accessed frequently, it tends to stay in cache, so a cache miss is less likely.
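In C terms, the transformation looks roughly like the following sketch. The function names and the `Point` struct are made up for illustration; a JIT performs this rewrite on its IR, not on source code.

```c
#include <stdlib.h>

struct Point { int x, y; };

/* Before: the object is heap-allocated, as the bytecode's `new` implies. */
int dist_sq_heap(int x, int y) {
    struct Point *p = malloc(sizeof *p);   /* heap allocation, GC work later */
    if (!p) return -1;
    p->x = x;
    p->y = y;
    int d = p->x * p->x + p->y * p->y;
    free(p);                               /* stands in for garbage collection */
    return d;
}

/* After: escape analysis proves `p` never leaves the function, so the
 * object can live on the stack (or be scalar-replaced into registers). */
int dist_sq_stack(int x, int y) {
    struct Point p = { x, y };             /* freed implicitly with the frame */
    return p.x * p.x + p.y * p.y;
}
```

Both versions compute the same result; the second simply avoids the allocator and the collector entirely.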
Removing Dynamic Dispatch
Building mostly on points-to analysis (or its little brother, rapid type analysis), this optimization removes one layer of indirection. Instead of dereferencing the vtable and looking up a function pointer, a statically known function is called directly. This means we do not have to access memory for the vtable, which reduces cache misses. Furthermore, it is now possible to inline the called function, which enables a range of additional (traditional) optimizations.
As far as I know, that's it. Of course, you want all the other traditional optimizations as well.