(Cross)-compiler interoperability

Compiler interoperability is a topic that doesn't get much love from developers and vendors. It denotes the ability to compile two source files with different compilers and link the resulting object files into a working executable.

A quick Google search yields an old Dr. Dobb's article which overall is still pretty relevant today. Especially the two paragraphs about C and C++ interoperability requirements are of interest, because they concisely show why C++ is so much harder to interoperate with than C.

In a nutshell, C is interoperable across multiple compilers if the following two assumptions hold:

  • The byte representation of types is identical.
    // This goes for built-in types
    unsigned long l;
    // As well as for self-defined ones (padding included or not)
    struct S {
      unsigned int i;
      char c;
      unsigned short s;
    };
  • The calling conventions are identical.
    // Which registers are used to pass and return?
    // E.g. r0  a
    //      r1  b
    //      r0  return
    int foo(int a, int b) {
      return a + b;
    }

This simplicity is one of the reasons why C is so ubiquitous and pretty much every other language out there features some kind of interface to it.

C++ on the other hand comes with three additional requirements:

  • Name-mangling must be identical.
    // The symbol foo gets mangled to _Z3fooii
    int foo(int a, int b) {
      return a + b;
    }

    A necessity that comes with function overloading, since defining another function foo with a different signature is perfectly legal.

  • The internal convention to resolve virtual function addresses (vtables) must be identical.
    // Symbols and layout of the internal vptr must match
    struct Base {
      virtual void foo() {}
    };
    struct Derived : Base {
      void foo() override {}
    };

    A necessity that comes with classic runtime polymorphism.

  • C++ language support must be compatible. This comprises features like exception handling, RTTI, operator new/delete and so forth. From an embedded perspective this is much less of an issue than the other two requirements, since none of those features are used much on small platforms anyhow.

Ironically, getting interoperability to work on embedded platforms is in my experience more difficult than on x86-64, but let's start with a little example before we move on.

x86-64 example

In our example we'd like to statically link an object file from one compiler to the object file of another and the other way round. I'll be focusing on GCC and Clang since those are the ones I have available.

Our library function foo, which compiles to libextern_gcc.o or libextern_clang.o

// extern.cpp
#include <iostream>

void foo() {
  std::cout << "foo\n";
}

And main, which calls foo

// main.cpp
extern void foo();

int main() {
  foo();
  return 0;
}

Compiling the library with GCC and main with Clang

g++ -c extern.cpp -o libextern_gcc.o
ar rcs libextern_gcc.a libextern_gcc.o
clang++ -o main main.cpp -L$(pwd) -lextern_gcc

And the other way round

clang++ -c extern.cpp -o libextern_clang.o
ar rcs libextern_clang.a libextern_clang.o
g++ -o main main.cpp -L$(pwd) -lextern_clang

Well, looks like everything works as expected. So where can we actually run into problems?

  1. Link-time optimization won't work, since using the -flto flag produces compiler-specific intermediate-format object files. Those object files are not compatible with each other because, for example, GCC uses GIMPLE and Clang uses LLVM bitcode as its intermediate format. I guess MSVC and other compilers (ICC and what not) use different formats as well…?
    GCC offers to create object files that contain both the intermediate format and normal object code. This can be achieved by passing -flto -ffat-lto-objects to the compiler. The resulting files can be linked with and without LTO. From what I know, Clang does not have that option.
  2. Linking different standard library implementations and C++ runtimes won't work. In our example we could have told Clang to use libc++ instead of libstdc++ by passing -stdlib=libc++ as an additional argument. This will most likely end in undefined references during linking.

The reasons why neither of those things is much of a concern on x86-64 are that

  1. LTO is mostly about reducing binary size, which tends not to be of the highest priority
  2. When was the last time you have seen someone manually link an application for x86-64?


Especially the last point makes a good transition to the topic of cross-compilation. When cross-compiling you're usually way more involved in the linking process. There is no obvious default choice for “include all the stuff I need” like there is on x86-64. Depending on your processor and requirements you might link a math library that uses hardware or software floating point. Depending on your processor's sub-type you might need a floating-point library for single or double precision. Maybe you need proper printf support, or maybe you don't need a standard library at all?

GCC tries to answer all those questions for you (or at least offer assistance) with its spec-file system. But in my opinion that obfuscates things even further. Instead of passing all linker flags by hand you just pass a spec file without really knowing what's going on under the hood. So to see what's actually happening you either end up running the compiler with -v or you're trying to reverse-engineer the spec files, which come with practically zero documentation. Clang is not able to understand spec files and most likely never will. The reason why I don't see that happening is the architectural difference between GCC and Clang. While GCC is always compiled to produce binaries for a single target, Clang is a native cross-compiler in itself. For example there is a GCC version for ARM, one for MSP430, one for AVR and so on, while for Clang there is just… well, Clang.

Apart from linking issues this difference has another pitfall which you probably wouldn't have thought about. All compilers come with a set of predefined macros which, among other things, determine the underlying types of all the typedefs defined in cstdint and cstddef. You can take a look at those predefines by running e.g. gcc -dM -E - < /dev/null.

Since each GCC binary is compiled for one target platform, those defines are baked into the particular GCC build you're using. So for example GCC for x86-64 gives me

gcc -dM -E - < /dev/null | grep SIZE_T
#define __SIZEOF_SIZE_T__ 8
#define __SIZE_TYPE__ long unsigned int

whereas GCC for ARM gives me

arm-none-eabi-gcc -dM -E - < /dev/null | grep SIZE_T
#define __SIZEOF_SIZE_T__ 4
#define __SIZE_TYPE__ unsigned int

Since Clang does not come with target-specific binaries, its predefines change according to the target triple it is given. This so-called target triple is basically a compact description of the architecture, vendor and OS/ABI to compile for, optionally refined by further architecture flags. Clang for x86-64 outputs

clang -dM -E - < /dev/null | grep SIZE_T
#define __SIZEOF_SIZE_T__ 8
#define __SIZE_TYPE__ long unsigned int

while Clang with arm-none-eabi target outputs

clang -dM -E --target=arm-none-eabi -march=armv7e-m -mcpu=cortex-m4 -mthumb -mlittle-endian -mfloat-abi=hard -mfpu=fpv4-sp-d16 - < /dev/null | grep SIZE_T
#define __SIZEOF_SIZE_T__ 4
#define __SIZE_TYPE__ unsigned int

The problem with this is that even the slightest difference between those defines results in different built-in types being used for things like size_t. If just a single one of those types differs and is used in function calls across parts of your code base built with different compilers, you'll end up with a whole lot of undefined reference errors during linking. And yes… I've spent hours trying to figure out where it went south…

arm-none-eabi-gcc -dM -E - < /dev/null | grep INT32_TYPE
#define __INT32_TYPE__ long int
clang -dM -E --target=arm-none-eabi -march=armv7e-m -mcpu=cortex-m4 -mthumb -mlittle-endian -mfloat-abi=hard -mfpu=fpv4-sp-d16 - < /dev/null | grep INT32_TYPE
#define __INT32_TYPE__ int


Long story short, there are two take-aways for making cross-compilers interoperable:

  1. Make sure you link the right library versions (of -lm, -lgcc, -lc and so on)
  2. Make sure your compiler predefines are compatible