Dalvik vs Mono


<Edit 2>I managed to get Mono running natively on top of Android's libc, Bionic! This resulted in another 50% or so speed improvement; I am guessing this is because Bionic is compiled for the Thumb instruction set, which is supposedly faster. The post has been updated once again with the new results.</Edit>

<Edit 1>I've updated the performance scores with Mono as pulled from the trunk (I was using 2.0.1) and using eglib. It's gotten faster and uses even less memory.</Edit>

<Edit 0>The original article was only about Dalvik vs Mono, and raised the question: why didn't Google leverage Mono's existing open source technology? However, I found that Sun has an ARM version of their Java Runtime Environment available as a 90 day trial, so I ran the tests against that as well. Sun's ARM JRE is around as fast as Mono, but at a greater memory cost. However, the Sun Java Runtime Environment is not open source, unlike Mono, so it is not a viable runtime for the Open Handset Alliance's platform.</Edit>

Google has had a little egg on its face recently. They wrote up a 40 page comic touting the awesomeness of Chrome V8's performance, only to be thoroughly trounced by TraceMonkey and SquirrelFish Extreme in comparative benchmarks. (It's ok guys, I still prefer Chrome as my browser though.)

So after that embarrassing showing, I was naturally a little skeptical about the supposed benefits that Dalvik provides for mobile devices. To better understand Dalvik's goals and inner workings, I watched an hour-long presentation starring its creator, Dan Bornstein. The two-line summary is that Dalvik is designed to minimize memory usage by maximizing shared memory. The memory that Dalvik shares consists of the common framework dex files and application dex files (dex is the byte code the Dalvik interpreter runs).

The first thing that bugged me about this design is that sharing the code segments of dex files would be completely unnecessary if the applications were purely native. In Linux, the code segments of libraries are shared by all processes anyway, so realistically, there is no benefit in doing this. In fact, Mono's managed assemblies also reap these same benefits of multiple processes sharing the same code segment in memory.

The second thing that bugged me about this presentation was that Dan starts out talking about how battery life is not scaling with Moore's Law, which is certainly true. But if the battery is the primary constraint on the device, why is Dalvik so concerned with minimizing memory usage? I am by no means a VM design guru, as I'm sure he is, but I can say the following with certainty:

  • Total memory usage has absolutely no impact on battery life. The chips are being powered regardless of how much of their memory is being used. Increasing the total memory available on a device will also only cause a marginal increase in battery drain. Memory is not something that taxes the battery compared to other components of the system.
  • Battery life is primarily affected by how much you tax the processor and the other hardware components of the device: especially the use of 3G/EDGE and WiFi radios.
  • Interpreting byte code will tax the processor and thus the battery much more than native/JIT code.
  • Modern (Dream/iPhone comparable) hardware running Windows Mobile is rarely memory constrained, and it doesn't have a fancy memory sharing runtime. Memory constraints (in my mobile experience) become an issue on Windows Mobile when several applications are running at the same time. And this problem can be solved at the application framework level, such as how the Android application life cycle is implemented. If all applications can suspend and restore at the system's whim, then memory consumption is trivialized. However, the application framework is not tied to the Dalvik runtime. (I.e., it can be ported to work with native code, Mono/.NET, the JVM, whatever.)
  • Generally in applications, the code's memory footprint is trivial compared to the application data's memory footprint (images, text, video, etc). Dalvik is overly concerned with optimizing the memory size of dex files and sharing memory. Dan's presentation did a comparison between the Browser's Java .class files and the Dalvik .dex file (the .dex file is around 250k, around half the size of the .class files). My reaction to that is whoopity-shit. What happens when you start up the Browser? You head to your favorite webpage, and it loads up a half dozen images which decompress to a raw R5G6B5 format and clock in at several megabytes. That really trivializes the few hundred kilobytes that Dalvik is trying to save.
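To put rough numbers on that last point, here is a back-of-the-envelope comparison. The image count and dimensions below are made-up illustrative values, not measurements; only the ~250k dex figure comes from the presentation.

```java
public class MemoryFootprint {
    public static void main(String[] args) {
        // ~250k of .dex savings cited for the Browser's code
        int dexSavings = 250 * 1024;

        // A half dozen decompressed images at a modest 800x600,
        // in R5G6B5 format (2 bytes per pixel)
        int imageBytes = 6 * 800 * 600 * 2;

        System.out.println("dex savings: " + dexSavings + " bytes");
        System.out.println("image data:  " + imageBytes + " bytes");
    }
}
```

Even with these conservative assumptions, the page's decompressed image data outweighs the code savings by more than an order of magnitude.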

This leads me to believe that Google committed a classic performance optimization mistake: they are optimizing an aspect of the system that is trivial in the grand scheme. To poke a little nerdy fun at a portion of Dan's presentation, it is akin to tweaking your for loops to iterate downwards for better performance, all the while the loops are being used to perform an inefficient selection sort.

Regardless, all speculations and theories aside, let's let real world scenarios speak for themselves. The T-Mobile G1, aka HTC Dream, has terrible battery life compared to its siblings of the Windows Mobile variety. (I own or have owned a Dream, Touch, Touch Cruise, and Touch Diamond in the past year.)


Runtime Memory Usage

My first test was to create a simple hello world program for both runtimes. Hello World would be printed to the screen, and then the thread would sleep for 30 seconds, allowing me to peek at the process's memory usage.
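The test program itself is trivial; a sketch of the Java side looks like the following (the original sources aren't reproduced in this post, so the class name and structure here are my own):

```java
public class HelloWorld {
    // 30 seconds in the original test: long enough to run `ps` and `pmap`
    // against the live process from another shell
    static final long SLEEP_MS = 30 * 1000;

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Hello World");
        Thread.sleep(SLEEP_MS); // keep the process alive for inspection
    }
}
```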

root     14500 43.0  3.8  14788  3816 pts/0    Rl+  01:35   0:01 ./mono MonoTests.exe ...
root      1605 49.0  4.8  34940  4812 pts/2    Sl+  20:10   0:00 /system/bin/dalvikvm ...
root      4464 64.0  6.7 180388  6740 pts/1    Sl+  18:21   0:02 java -jar JavaTests ...

Ok, so this surprised me a bit: Mono needs around half the memory to start up? Using pmap on the dalvikvm process shows that it references a lot more "base" system libraries than Mono. I suppose at the end of the day it doesn't matter, because on Linux, libraries are loaded once and shared between processes. I also took a pmap snapshot of Mono and Java for those interested (Sun's ARM JRE is quite bloated...).


Performance Comparisons

I'll be the first to admit, these comparisons aren't fair at all. No interpreter will ever run as fast as native code. But I'll test it anyway. These tests purposely steer clear of calling into underlying libraries; the goal is to benchmark the memory usage and performance of the runtimes themselves by way of very simple applications. Click here to view the code for the Java and the C# tests.

Selection Sort Test

This test creates a reverse-sorted array of the integers between 0 and 1000 and sorts them into increasing order (and does it 10 times, excluding the results of the first). Lower numbers are better. Results:
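For reference, a sketch of what the benchmark does (the linked test sources are authoritative; this reconstruction may differ in details such as timer choice):

```java
public class SelectionSortTest {
    // Plain selection sort: repeatedly move the minimum of the
    // unsorted tail to the front
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
    }

    public static void main(String[] args) {
        long elapsed = 0;
        // 11 runs; the first is discarded as warm-up
        for (int run = 0; run < 11; run++) {
            int[] data = new int[1001];
            for (int i = 0; i <= 1000; i++) data[i] = 1000 - i; // reverse sorted
            long start = System.currentTimeMillis();
            selectionSort(data);
            long time = System.currentTimeMillis() - start;
            if (run > 0) elapsed += time;
        }
        System.out.println("total: " + elapsed + " ms");
    }
}
```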

                      Time (ms)
Dalvik                     4668
Mono                        411
Java SE for Embedded        895


Class (and Structure) Method Call Test

This test instantiates an array of 10000 FibContainer instances. FibContainer is a simple class:

C#:

class FibContainer
{
    int myValue;

    public int Value
    {
        get { return myValue; }
        set { myValue = value; }
    }

    public void Compute(FibContainer previous, FibContainer beforePrevious)
    {
        Value = previous.Value + beforePrevious.Value;
    }
}

struct FibContainerStruct
{
    int myValue;

    public int Value
    {
        get { return myValue; }
        set { myValue = value; }
    }

    public void Compute(FibContainerStruct previous, FibContainerStruct beforePrevious)
    {
        Value = previous.Value + beforePrevious.Value;
    }
}

Java:

class FibContainer
{
    private int mValue;

    public int getValue()
    {
        return mValue;
    }

    public void setValue(int value)
    {
        mValue = value;
    }

    public void Compute(FibContainer previous, FibContainer beforePrevious)
    {
        setValue(previous.getValue() + beforePrevious.getValue());
    }
}

It then iterates over the array, calculating and storing the Fibonacci series. The test notes three things: the total memory in use by the runtime after allocating the array, the time to allocate the array, and the time to calculate the Fibonacci series (the method calls are intentional). Note that I also performed this same test on Mono with a feature not available in the Java language: I used a struct instead of a class. Smaller numbers are better in all cases. This test was run 50 times (the first excluded):
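A sketch of the Java side of the test, reconstructed from the description above (the linked sources are authoritative; the values overflow int long before index 10000, which is fine since the benchmark only cares about the method calls and allocations):

```java
class FibContainer {
    private int mValue;

    public int getValue() { return mValue; }
    public void setValue(int value) { mValue = value; }

    public void Compute(FibContainer previous, FibContainer beforePrevious) {
        setValue(previous.getValue() + beforePrevious.getValue());
    }
}

public class FibTest {
    static final int COUNT = 10000;

    public static void main(String[] args) {
        // Timed section 1 in the real test: allocate the array
        FibContainer[] fibs = new FibContainer[COUNT];
        for (int i = 0; i < COUNT; i++)
            fibs[i] = new FibContainer();

        // Timed section 2: fill in the Fibonacci series via getter/setter calls
        fibs[0].setValue(0);
        fibs[1].setValue(1);
        for (int i = 2; i < COUNT; i++)
            fibs[i].Compute(fibs[i - 1], fibs[i - 2]);

        System.out.println(fibs[10].getValue()); // prints 55
    }
}
```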

              Memory (bytes)  Allocation Time (ms)  Calculation Time (ms)
Dalvik Class         9817240                  5388                  3013
Mono Class           7376896                   933                   167
Mono Struct          2007040                    12                   107
Java Class          10438176                   319                   211
C++ Class            1960000                   N/A                   N/A

Equivalent C++ code would allocate the amount of memory shown above. So, as you can see, Mono has around 33% less overhead than Dalvik when allocating classes. It is also nearly 6x faster at doing those allocations, and its calculation time completely blows Dalvik out of the water. And by way of intelligent usage of structs in Mono, you can achieve near bare-metal memory usage. (Not to mention that arrays of structs containing blittable types are themselves blittable. This is very friendly to processor/memory caches, and it also provides for easier interaction with native calls, such as sending an array of vertices to OpenGL. But I digress.)

Long story short: from my initial, limited, and naive testing, Mono is faster and uses less memory than Dalvik. And it is not even designed to run on mobile devices. So it raises the question: why didn't Google just convert the .class files to CIL and use the Mono runtime? That way they wouldn't have alienated Java developers, would have had access to the open source Java libraries they so covet, would have wooed .NET developers, and wouldn't have needed to invent their own sub-par runtime!

I also want to test the performance of the two garbage collectors as well as native function invocation (P/Invoke and JNI), but I'm hungry and will do that in a later post. Until next time, friends!


Anonymous said...

I guess it's possible to run the Sun JVM on the G1. It may show a very interesting comparison to Dalvik and Mono...

Koush said...

Ahh, so Sun does have an ARM JRE! I've installed it and run the tests against that as well. It did pretty well!

Markus Kohler said...

IMHO you somehow miss the point.

Dalvik supports isolation of applications in a safe sandboxed environment, which is something Mono does not support, nor does Java (except maybe a not very famous project by SAP).
For this to work you need to share code, otherwise too much memory would be needed.
You are talking about premature optimization and that comparing a JIT against a bytecode interpreter is meaningless. Who says that it's very important to have a JIT compiler, when most code is executed in native libraries anyway? Any references to reviews of the G1 where performance was considered not to be good enough?

And a JIT compiler has memory overhead because it needs a code cache and supporting data structures. As far as I remember there was a plan to also implement a JIT compiler for Dalvik.

Byte code typically needs less memory than machine code.

You may underestimate the size of code versus data.
Just check my post about the memory usage of Eclipse (code)

As far as I know Dalvik can also share data between processes, which is as far as I know not supported by Mono.

danfuzz said...

First and foremost, thanks for the thoughtful analysis. However, I have a few comments:

I think you took some of the statements from the Dalvik talk out of context. Also, of necessity a one-hour talk is going to end up with a lot of details elided.

Regarding battery life, the intent of that comment was with respect to comparative interpreter performance, but compactness of bytecode actually does impact battery life as well, in terms of being able to effectively hold more code in RAM, thereby avoiding paging.

In addition to sharing the executable memory, Dalvik also shares much of the heap contents, which I think you may have neglected to account for in your analysis.

Regarding the possibility of converting to CIL, that ignores the problem of the class library, which is of considerable size and would have to be ported to a somewhat different object model. I won't claim the library couldn't be made to work -- it's all Turing complete after all -- but it would be considerable work for no clear short-term goal.

Finally, one thing to keep in mind with all this is that there were many reasons Dalvik ended up with the shape it took on, only some of which you chose to debate. Although it is good and interesting to debate individual aspects, a true reckoning has to look at the whole picture.

We did the best we could in the time we had, but no doubt there are things we could have done better. And we are certainly not resting on our laurels now. I hope you will find Android and Dalvik still thriving at the end of 2009, with a year's worth of carefully considered improvements.

Warm regards in the new year.

Koush said...


Your statements are not valid in the context of the Android platform; I'm guessing you aren't that familiar with how application security works on Android.

Security and sandboxing is done at the OS and framework level: every Android application is given its own user ID. Each user ID can only access a certain set of system features. For example, the browser is allowed to access the internet, but the alarm clock is not. Applications specify up front in their manifest which security permissions they need.
For example, if you do "ls -l" in the /data/data directory, you can clearly see that each application has its own user ID.
This security model is not implemented by or tied to the Dalvik VM; it is done by the Android system. At the operating system level, it is impossible for an application to interfere with another.

And reiterating from my post, code segments in Mono are read/execute only, and shared between processes.

JIT compilation is also generally something that only needs to take place once. The results can then be cached. In fact, Mono has an AOT compiler (Ahead Of Time Compiler) that compiles CIL to a native binary.

So your points may be moot in the context of Android.

Koush said...

Hi Dan, thanks for your comments.

As I said myself in the closing comments, my opinions and "limited" analysis are "naive". So I'm sure I missed various other problems that Dalvik may be trying to address.

You are correct that I neglected to talk about the heap sharing in Dalvik. I had read/heard about that feature, but I'm not well versed enough to talk about it. :) Do you mind explaining the scenarios where that would be useful? (Marshalling data between processes via AIDL and intents maybe?)

To clarify a bit further, I'm not trying to make this a .NET vs Java vs Dalvik holy war. In fact, as you said, in the end it is all Turing complete.

The advantage (that I saw at least) in using Mono's CIL runtime is that there is already an interpreter available as well as a slew of JIT compilers for several architectures. And keep in mind, using CIL does not tie you to the .NET libraries. Just as Android currently converts .class to .dex, you can surely do a .class to CIL (Not that I want to make that sound like a trivial task, or that I would know how that would be done). Basically leveraging the abundant Java classes under the Mono runtime.

Koush said...


Forgot to address a couple points of yours:
"Who says that it's very important to have a JIT compiler, when most code is executed in native libraries anyway?"

For one, the Android team has said so themselves. It is definitely on their list of things to do.

"Any references to reviews of the G1 where performance was considered not to be good enough?"

I can write one if you want. :) Joking aside, if you really want to write cool applications for Android, such as the rich 3D games you find on the iPhone (lighting, collision detection, shadows, etc.), you will be writing them in native code and calling them through JNI. That's clumsy.
Hell, I needed to write a function in an application that converts a small image (320x480) from 32bpp RGBA format to R5G6B5. That took Dalvik around 15 seconds.
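For the curious, the conversion in question is just a tight per-pixel repacking loop, something like this (hypothetical code, not the actual application's; the class and method names are my own):

```java
public class Rgb565Convert {
    // Repack a 32bpp RGBA buffer into R5G6B5 (2 bytes per pixel)
    static short[] toRgb565(byte[] rgba, int pixels) {
        short[] out = new short[pixels];
        for (int i = 0; i < pixels; i++) {
            int r = rgba[i * 4] & 0xFF;
            int g = rgba[i * 4 + 1] & 0xFF;
            int b = rgba[i * 4 + 2] & 0xFF; // alpha at i * 4 + 3 is dropped
            // keep the top 5/6/5 bits of each channel
            out[i] = (short) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
        return out;
    }

    public static void main(String[] args) {
        // One 320x480 frame, as in the comment above
        byte[] frame = new byte[320 * 480 * 4];
        short[] converted = toRgb565(frame, 320 * 480);
        System.out.println(converted.length + " pixels converted");
    }
}
```

That is roughly 150,000 iterations of trivial bit twiddling; taking seconds to interpret it is exactly the kind of cost a JIT would erase.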

m said...


Mono does support multiple applications per process. These are called application domains and could be used to share the "heap" or even share more of the regular startup code.

Application domains are *typically* used by Web servers where each individual application thinks that it is a full process, but they are merely illusions created by Mono or the .NET runtime.

This technology can be used for desktop applications or, if desired, for a setup like Android's, where a single VM runs that hosts multiple processes.

It could be implemented by having a "host": instead of using "mono program.exe" you would do "loader program.exe", which merely informs the host to load a new executable in a fresh AppDomain, execute it, and destroy the AppDomain when finished.


Markus Kohler said...

Hi Koush,
Witt "isolation of applications" I meant that in Android each application is running in it's own process and can crash without crashing the whole phone and still be able to share data and code.

This is not suppported in Mono as far as I know.

I doubt Mono without an AOT supports code (and data) sharing.

JIT's currently dominate in the Java world for a good reason.
They can outperform AOT's, because the can decide to reoptimize Code based on profiling data.

Yes, I hope a JIT for Dalvik will come soon :)


Koush said...

Hi again Markus.

As Miguel stated, Mono (and .NET) does exactly what you claim/think is unique to Dalvik with regards to sandboxing and heap sharing. Check the post right above yours, or take a peek at the AppDomain class in .NET. AppDomains have been around in .NET since version 1.0.

David said...

Thanks for the great post.

I suspect licensing is an issue. Currently the Mono runtime is under the LGPL, and the G1 is an embedded device.


"If you manufacture a device where the end user is not able to do an upgrade of the Mono virtual machine or the Moonlight runtime from the source code, you will need a commercial license of Mono and Moonlight." The G1 operating system is not user upgradable, so it does not qualify.

Dalvik and Android, on the other hand, are under the Apache License, and are in a sense "more free" than mono.

License issues aside, IMO CIL/Mono's precompiled shared libraries would allow the same cross-process code-sharing Dalvik supports while also allowing native code speeds.

David said...

btw... "app domains" are not the same as Dalvik sharing.

App Domains are within a single process. If there is a runtime or native-code bug in an App Domain, the other App Domains can be compromised.

Dalvik sharing occurs across process boundaries. The more direct analogy in CIL is simply using multiple processes and using precompiled shared-libraries to share code across processes.

Koush said...

Interesting point about the LGPL clause David!

But that hasn't stopped Google from consuming LGPL projects; Android currently uses several of them, notably key Browser components based on WebKit (WebCore, JavaScriptCore).

Hell, the Linux kernel itself is GPL, and you can't replace that on a user's device...

danfuzz said...

>Do you mind explaining the scenarios where [heap sharing] would be useful?

It's used in Android to amortize the RAM footprint of the large amount of effectively-read-only data (technically writable but rarely actually written) associated with common library classes across all active VM processes. 1000+ classes get preloaded by the system at boot time, and each class consumes at least a little heap for itself, including often pointing off to a constellation of other objects. The heap created by the preloading process gets shared copy-on-write with each spawned VM process (but again doesn't in practice get written much). This saves hundreds of kB of dirty unpageable RAM per process and also helps speed up process startup.

On the licensing issue, IANAL so I cannot authoritatively outline the reasoning, but the Android project does in fact avoid GPLed code in device builds except for the kernel, and the uses of LGPL in userspace are very carefully arranged. It would be a very hard sell indeed to have the project start relying on a piece of LGPLed code in every application process, in that one of the goals of the project is to allow for proprietary closed-source applications to be run on the resulting system.

(Note: I happened to check for follow-ups here after I posted my initial comment, but I would recommend posting to android-discuss@googlegroups.com if you are looking for a forum with a better chance of getting Android devs to notice.)

Anonymous said...

I believe you didn't understand Dan's presentation after all.

The Android security model imposes that each application runs
in its own isolated process, which forces you to implement
one-whole-VM per process runtime model. In case you don't
realize this, this is different from AppDomain because it
allows, among other things, your application to run arbitrary
native code without risking compromising the system's security.

Dalvik is thus designed to share as much code *and* data as
possible between processes. That's why there is an initial
"zygote" process that loads a very large number of system classes
at boot time (where loading each class requires running code
and allocating objects in the heap), which is later forked with
copy-on-write semantics when a new application needs to start.
Consequently, this speeds up the startup of new applications.

I don't think Mono supports any of that.

Finally, Dalvik's design doesn't preclude a JIT; it's just that
this is not necessary for a large number of applications, and
having a really good interpreter was more important to
ship a 1.0 device.

Anonymous said...

Mono is for Windows coders who can only write code with their point/click menus that tell them what to do and now have to make it run on 'nix. 'nix coders (real coders) have no use for Mono or Windows so all this talk is useless nonsense.

Anonymous said...

It's interesting to see that you probably chose a benchmark that would run poorly on an interpreter (nobody really computes a Fibonacci sequence with getters/setters on containers).

Do you have performance numbers for interpreted mono, I would find them much more interesting.

m said...


It is correct, AppDomains do not offer the exact feature that Dalvik supports, but if memory consumption is an important issue there are a number of ways of addressing that:

* Using AOT compilation which generates PIC code that can be shared across multiple processes in the same way that C libraries do today.

* AOT can be performed on the target device to optimize/tune for that particular piece of hardware (Unlike x86, I do not believe that we have any major ARM-specific features today, but they could exist at some point).

As for sharing the heap, I can think of only a few point cases where this would make sense.

For instance, if there really is a process isolation setup in Dalvik, the most important feature is to keep the heaps separated, not shared. Otherwise, corrupting the heap in one process (due to bugs, security errors, or whatever other reason; the same reasons against AppDomains) would corrupt the heap of other processes, bringing the same instability that AppDomains bring.

The cases where I think it would be useful to share the heap are startup-computed tables, translation tables, and a few things like that. KDE uses a setup like this where all the shared data is computed first, then the process is forked, and libraries are dlopened into the process to maximize this sharing.

A setup similar to this would be trivial to implement.

If security and isolation are of such fundamental importance, I do not think that heap sharing is that good of an idea.


m said...

@Anonymous, I have been programming in Mono for eight years, and been using Emacs almost exclusively.

Anonymous said...

Miguel, just to answer your two points:

1/ except for really tiny loops, native code is much less dense than the corresponding Java or Dex bytecode (easily 2x or 3x larger). PIC makes the code even larger. Just ask the people who implemented the Mono JIT. AOT is nice when you have the free disk space, but not always affordable depending on your flash budget.

2/ the Dalvik heaps are shared copy-on-write, which means that they are properly isolated from one another. They contain objects created at initialization time, and most of them will never change, so sharing is high.

To clarify, the main constraint should be stated as "avoid using too much memory when your security model requires that you run 10 or 20 different VM processes on the system".

I don't think Mono can do that efficiently, and I'd be surprised if it could acquire the capability with trivial changes.

However, I'd be happy to be proven wrong. It would be a great feature for embedded Mono.

Iker said...

VM process isolation and cleanup are re-implementations of two features that Linux can handle very nicely with traditional processes. That the Dalvik VM builds on processes, rather than reimplementing them, is a refreshing and very welcome development. More so in light of the improvements to Linux over the years that have honed processes to the point where many of the old complaints against them no longer hold.


Anonymous said...

Some cool things are happening with Mono.

First, Mono on iPhone is taking off.

Also, IKVM.NET lets Java run on Mono.

m said...

Hello Anonymous,

You are right that AOT code can take more disk space than bytecode, but you are missing an important point.

You do not need AOT for every single executable. To begin with, the chances of you running the same program twice on the phone are zero; the OS or the apps are likely enforcing that you have a single mail app, a single IM client, etc.

So the majority of the sharing would come from AOTing key libraries like mscorlib, System, and whatever other Android-specific system library you need. You would have to pay the price for those, but only for those.

Depending on the kernel's copy-on-write semantics merely for initialized data points to the scenario that is handled by systems like `kdeinit': startup initialization of runtime-computed data.

In GNOME, for example, plenty of this data was transformed into mmap()able data so that the kdeinit-like setup was not required. I do not see why this would not be useful in Android.

If you have data, please share what kind of data is shared across these processes that depends on the kernel copy-on-write facility. Although a general purpose solution is not a trivial change, as you point out, I would think the 80/20 rule applies and you can get 80% of it with relatively simple changes.

Mike Hearn said...

Miguel, translating data into mmap()able form is a good start, but a lot of Java/.NET frameworks just create objects, that's what they do. Lots and lots of objects. Objects to hold their configuration, objects to manage their logging, whatever. So sharing these objects via copy-on-write is useful, and the Dalvik design gives you this "for free" with no need to go in and rewrite code.

Anonymous said...

Miguel, you seem to advocate that, with some additional work (which you tag as "trivial" though I seriously doubt this is true), Mono could be a very good VM to base Android on. Of course, this is possible; I just believe it's going to be a lot more hairy than you imagine. And nothing will come out of this discussion unless you start playing with the Android sources and port the framework on top of Mono with a Java -> CIL compiler.

Google made a different decision based on various other factors (e.g. the GPL license), and instead they came up with a different solution that solves the original problem.

The solution isn't Mono, doesn't have a JIT yet, whatever. But it works reasonably well and ships on commercial devices now. And for a 1.0 version, the Dalvik VM isn't really bad, in my not-so-humble opinion.

By the way, there are several anonymous posters in this thread, I'm certainly not the one who wrote "Mono is for Windows coders, etc..." :-)


shamaz said...

Actually, we can build OpenJDK 6 for ARM.
But anyway, as you say, a Fibonacci test is quite naive ;)

I also never understood why Google wanted to make their own bytecode format... (except for marketing reasons)

Anonymous said...

Thanks for the review. I just started writing code for the Android OS on the G1 phone and feel the VM is very slow. I was looking for just this sort of information to verify my impression.

It appears that the garbage collector is a very weak point too.

I look forward to a JIT environment in hopes of getting reasonable performance.

Anonymous said...

Poor performance of the GC has nothing to do with this thread, and it will certainly not improve with a JIT.
And most of all, when you say "poor performance of the GC" you probably mean "badly designed applications that create heaps of objects just to throw them away".

Anonymous said...

It appears the poor battery life has little to do with anything you mention in your analysis.

People don't like to talk about it much, but the G1 is constantly running applications in the background. Specifically, it is running various system services, collecting statistics and reporting those statistics back to Google. Furthermore, by default the G1 comes with synchronization enabled, which is constantly syncing email, contacts, and calendars with Google too.

As a result, for most people, the phone is running down your battery 33% to maybe 60% of the time when it has no need to do so.

Additionally, they seem to have some bugs with their WIFI/DHCP drivers which have a tendency to drain the battery faster than they should. And let's not forget about applications like Locale and some popular shopping and weather applications which are seemingly excellently designed to assist in not only draining your battery but also exploiting the WIFI/DHCP bugs, further burdening your battery.

No platform, framework, fundamental design issues are required to explain the G1's poor battery performance.

How do I know this? I'm getting ready to place an application on the market which specifically works to limit these types of activities. My standby time has moved from ~2-3 days to estimated durations of up to 16 days.

Long story short, HTC has some driver bugs to fix. Google has some framework/driver bugs to fix. Google also has some core services which direly need attention. Lastly, the framework needs to be further improved to allow applications to better assist at power management tasks.

Anonymous said...

I can't even finish reading your article after your first statement.

Oh goodness! Chrome V8 doesn't do so hot in arbitrary performance graphs! Oh well, good try Google. Not. Google for and open any intense JavaScript game or physics simulation and play it in Chrome, then play it in the latest Firefox that uses TraceMonkey. TraceMonkey is a joke. There's some real world for ya.

werpu said...

I saw the presentation and it looked rather thoughtful to me. The Dalvik creator made good decisions regarding the domain the application runs in: process isolation for programs while still being able to share data, cutting down on memory consumption by using a register-based VM instead of a stack-based one, etc.

I think for a 1.0 VM it is a very good approach. And yes, as it seems, the entire VM is not the fastest, and Google knows that (and does not even deny it); hence for now they promote it mostly as a glue layer (a sort of scripting layer) and have pushed everything under the earth into native functions. Those who have performance critical code running in Dalvik and have to move down to the native level, however, are out of luck. Going with JNI was the only non-sound decision in the entire thing; JNI is not the best way to integrate native code with a VM, and others have done it in better ways pre-Java.

As for not going with Mono in the first place, my personal guess is simply the license. Mono's license is not compatible with the relatively free Apache license the rest of the userspace uses. God knows what would have been chosen if there were not Harmony as an Apache-based Java implementation, along with the Apache-licensed Commons libraries. I think Google simply made the decision to roll their own VM to stay on Apache2 in userspace while coping with the problems you face on phones. Not going with J2ME was a sound decision as well: first, they wanted fast control over that stack; secondly, J2ME is hell to begin with. It was never really good, and it was always crippled by the phone vendors, who never wanted to get important APIs into the stack so that they could bind developers to their implementations.

The last issue probably was that the work on Dalvik started around 2006, and Mono was never as far along then as it is today. But I think the biggest issue which prevented Mono from being adopted as the base (or C#, for that matter) was its license. Without an Apache or BSDish license, there is no way you make your way into the userspace of Android; it is as simple as that.

karbon said...

Well. If you write a mono OS for android... let me know. I'll help.

Anonymous said...

Could you please add a comparison with the JIT from 2.2 enabled?

Thanks a lot! (If you can.. :D)

Anonymous said...

Nice article.
I've only seen the Android source code.
I browsed a little bit through the sources written using C++ syntax.
That language should be called GROOGLE, with spaghetti objects.
Nothing more.

Anonymous said...

Could you also include Vala ( http://live.gnome.org/Vala ) in your benchmark?

Anonymous said...

Your brain should hurt, since you knew you were unfairly comparing interpreted code vs native code and still based all your reasoning on that false ground.

How many people/developers prefer Mono over Java?

Anonymous said...

jesus... it's like a married couple lol

Anonymous said...

and ian totally agrees with me!

Anonymous said...

Stop hurting yourselves over this will help i hope... Enjoy ;)