Building Mono using the Android NDK

I finally got around to looking at the Android NDK yesterday. It is basically a subset of the Android Source build environment, but it has all the bits necessary to compile Mono. It only took a bit of finagling with the Android.mk to make everything work right. Building for Android is pretty easy now; the longest part is actually downloading and building the mcs assemblies. There’s a script to handle that though.

The androidmono repository has updated instructions.

Mono 2.5 for Android Released

On and off over the last few weeks, I’ve been working on getting the Mono 2.6 branch, and its super awesome soft debugger, working on Android. I’ve also spent some time documenting and cleaning up the currently horrendous build process involved (I was really the only one that knew how to do it). Anyways, the Android Market now has the current 2.6 branch of Mono available for download (2.6 is not complete yet, hence the reason I am referring to it as 2.5). This release supports Cupcake, Donut, and Eclair!

The project is hosted on Github in the androidmono repository.

For those interested in building/hacking this themselves, the build instructions are in the repository as well.

Developing and Debugging .NET applications on Android with Mono(Develop)

Lots of cool stuff has been happening in Mono recently! First, MonoTouch was released, which allows the development of iPhone applications in C#/MonoDevelop. The catch was that there was no debugger support. To address this glaring problem, the Mono team created and released a Soft-Mode Debugger, which is basically an in-process debugger that lives in the runtime. After peeking at the code, I realized that porting it to Android would be a cinch! The result: you can now create a console application in MonoDevelop, deploy it to an Android emulator or device, and debug:

[Image: md-android]

 

I blogged a while ago that I was working on an interop layer for Mono to allow it to utilize the Android SDK. This is mostly complete. Once I write up a tool to generate convenient wrapper classes, writing full blown applications for Android will become a reality!

Build Configuration for the HTC Magic

[Image: MyTouchWhite]

A few days ago, I released build scripts that would allow developers to make images that target the T-Mobile MyTouch. I finally got a hold of a regular Magic (a Sapphire 32A) and have created a corresponding project to support building for this device. So, this will now work:

. build/envsetup.sh
lunch htc_magic-eng
make

Please note the instructions regarding cherry-picking certain changes from Donut into your Android repository. The Sapphire 32A uses HTC’s kernel offsets in the boot.img, and not Google’s. Without those changes, your phone will not boot.

Build Configuration for the T-Mobile MyTouch

[Image: MyTouchWhite]

The HTC Dream is the only officially supported Android development device. But it is possible to do development on other devices by cannibalizing the various drivers found on them. This usually involves modifying/repacking an official update.zip. However, with a little work, it is possible to use the Android build system to create legitimate fastboot images!

After following the instructions to retrieve the MyTouch build configuration and extracting the proprietary bits from a running device, you can do the following to build images for the MyTouch:

. build/envsetup.sh
lunch htc_mytouch-eng
make

There is also a mkupdatezip script available in the Downloads section of the repository to assist in creating update.zip files!

The images will then be found in out/target/product/mytouch-open. A build configuration for the Magic will be coming soon! (It is the same board as the MyTouch, a Sapphire, but different enough to need a separate config.)

Note:

The general process in creating this board makefile was basically the following:

  1. Start with the Dream build config (dream-open).
  2. Extract the kernel from a running device and replace it.
  3. Modify the extract-files.sh script to adb pull the correct proprietary files.
  4. Modify the .mk files to reference the proper proprietary files.
  5. Grab the wlan.ko off the device and replace the one at the repository root.

MVP!


I was recognized as a Microsoft Most Valuable Professional for my work over the past year and a half with Windows Mobile. I get access to lots of super awesome benefits, juicy gossip, and top secret NDA stuff! Cool!

Command Line Image Shrinker (shrinkimage)

[Image: mario_hit-and-shrink]

One of the most tedious things I seem to do on a regular basis, for both blogging and developing, is downloading an image off of Google Image Search and shrinking it. Having to scale a lot of images to a similar size can become really annoying. I ended up writing a command line tool to shrink an image given a percentage or a maximum dimension. Probably won’t be useful for many, but who knows!

Grab shrinkimage off of GitHub.

GitHub Is Pretty Magical

Warning: Weak browsers (aka Internet Explorer) may not be able to render this blog entry in its glory. Use a real browser!

If you browse on over to the Code and Applications section of this site, you will notice that it takes a little time to load. That’s because that entire page is being dynamically generated by your browser via the GitHub (JSON) APIs. What you see on that page is a live, up-to-the-minute list of every public project in my GitHub repository.

And here’s the JavaScript code that does it:

But that’s not really the magical part. Here’s the code that inserted the code above into this page:

<pre class="githubfile" file="githubprojects.js" repository="GithubProjects" user="koush"></pre>

The first bit of JavaScript code you see above is a live view of the file that is checked into my GitHub repository… and your browser is using the GitHub API to fetch it, highlight it (using SyntaxHighlighter), and then render it to your screen. Kudos to GitHub for a fantastic API!

I wrote GithubProjects up today as an exercise in learning jQuery, AJAX, and some other Web 2.0 acronyms. Of course, all of this code is available in my GithubProjects repository.

Extension Methods to Die For

This will be a running blog entry that I’ll update periodically when I have new ideas.

Enum.ToUserString(); (the implementation is in the “Converting Enumerations to User Readable Strings in .NET” entry below)

string.IsNullOrEmpty:

public static bool IsNullOrEmpty(this string str)
{
    return string.IsNullOrEmpty(str);
}
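
A nicety worth calling out: extension methods compile down to static method calls, so unlike an instance method, this one can safely be invoked on a null reference:

string str = null;
// No NullReferenceException here: the compiler rewrites the call below into
// the static call ExtensionMethods.IsNullOrEmpty(str).
if (str.IsNullOrEmpty())
    Console.WriteLine("null or empty!");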

Android’s Linker makes Baby Jesus Cry

release-1.0 branch:

static int open_library(const char *name)
{
    int fd;
    char buf[512];
    const char **path;

    TRACE("[ %5d opening %s ]\n", pid, name);

    if(strlen(name) > 256) return -1;
    if(name == 0) return -1;

    fd = open(name, O_RDONLY);
    if(fd != -1) return fd;

    for(path = sopaths; *path; path++){
        sprintf(buf,"%s/%s", *path, name);
        fd = open(buf, O_RDONLY);
        if(fd != -1) return fd;
    }

    return -1;
}

cupcake branch:

/* TODO: Need to add support for initializing the so search path with
 * LD_LIBRARY_PATH env variable for non-setuid programs. */
static int open_library(const char *name)
{
    int fd;
    char buf[512];
    const char **path;

    TRACE("[ %5d opening %s ]\n", pid, name);

    if(name == 0) return -1;
    if(strlen(name) > 256) return -1;

    if ((name[0] == '/') && ((fd = _open_lib(name)) >= 0))
        return fd;

    for (path = sopaths; *path; path++) {
        snprintf(buf, sizeof(buf), "%s/%s", *path, name);
        if ((fd = _open_lib(buf)) >= 0)
            return fd;
    }

    return -1;
}

Summary: Android’s linker used to look in the current working directory to resolve library references. Now it doesn’t. (And it never used LD_LIBRARY_PATH at all.) This is really only annoying for 3rd party command line applications which reference libraries that aren’t part of the standard Android build (and thus not contained in /system/lib). I need to figure out how to make gcc, et al, create binaries that store the full path to the libraries they reference, rather than just the base file name...

Edit: I tried patching it to use LD_LIBRARY_PATH, but calling getenv from within the linker would always return NULL. Not sure why. So I did the next best thing and added “.” to the list of search paths. I think that is default behavior anyways on other Linux based systems (will verify later). Submitted the change, hope it gets accepted.

I Need a Bigger Desk at Home

So I have room for more gadgets.

IMG_1231

Breakpoints Not Working When Debugging in Xcode on a Voodoo Kernel Hackintosh

I recently built an OSx86 machine, but I had never actually tried doing any (native) development [0] on it, as I had been using my MacBook for that purpose. Tonight I finally fired up Xcode on the new desktop, created an iPhone application, and found that my breakpoints, which were being correctly set and detected, were never actually being hit. It was as if they were being ignored completely.

After Googling around for some hints as to why this would happen, on a hunch I tried debugging a C application that targeted the OSx86 machine itself: same issue, no breakpoints were hit. My desktop and laptop have the exact same applications and tool chains, and the only real difference between the two machines from a software standpoint was the kernel. This led me to believe that the Voodoo kernel used on the desktop had some debugging issues. Doing a search on this confirmed my suspicion, as the Voodoo team has an open bug on this exact issue. Luckily, a few searches down, I also found a workaround that happens to fix the issue for Intel based machines.

When starting your system, enter the following as one of your boot flags:

std_dyld=1

From the Voodoo kernel documentation:

std_dyld= On-the-fly opcode patching requires an elaborate technique to patch dynamic libraries. Because of this, the kernel includes its own specialized copy of dyld, and chooses the best one depending on your CPU. Use this option to force the kernel to use the standard dyld (pass value 1) or its specialized copy (pass value 0), regardless of your CPU type. Note: AMD installs might fail to function if you specify this boot-flag!

While Googling, I saw numerous forum posts about this issue on OSx86 machines, with no one realizing there is a workaround. So hopefully this post matches enough keywords to save someone an hour or two of searching. :)

Edit/Note: If you are booting off a kernel on your EFI partition, make sure that it is named "mach_kernel" and that a copy of that exact kernel is available at /mach_kernel on your system partition. (I had issues with VMWare not working until I did this, and suspect it may cause issues with Xcode too.)

[0] I have been using it to do Android development, and have no issues debugging the Dalvik VM. But I am guessing the Dalvik VM on an emulator/device does not require the same low level kernel hooks on the host.

Klaxon and Screenshot – Android 1.5 Version

Quick update: I have made fixes to both Klaxon and Screenshot to support Android 1.5. However, I am still hearing reports from users that Klaxon does not work on Haykuro builds; the device goes into a reboot loop. Haykuro does not have a build based off the T-Mobile OTA update, so I cannot provide support for his builds; I believe he is using older/pre-release Cupcake pieces in his current builds. Once he updates, I may look into the issue further.

I am currently running JF 1.5, and have no issues with either application. Unfortunately, I cannot spend my time supporting every fork/tweaked version of Android (Haykuro, DudeOfLife, etc.). The only supported builds will be Retail and JF.

Converting Enumerations to User Readable Strings in .NET

Suppose you have the following scenario: you have a function that can return multiple result codes via an enum value. For each of those enum values, you want to inform the user of the result of the operation with some user friendly string.

A common implementation I see is some sort of method that resolves enumerations to a string by way of a switch statement or hash table or something. And whenever that string is needed, you call into said method.
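
For example, a sketch of that kind of switch-based mapping (the method name is mine; the enum is the AuthenticationResult from the full sample below):

static string GetMessage(AuthenticationResult result)
{
    // This switch has to be kept in sync with the enum by hand.
    switch (result)
    {
        case AuthenticationResult.NotRegistered:
            return "This username is not registered.";
        case AuthenticationResult.BadPassword:
            return "Incorrect password.";
        case AuthenticationResult.Success:
            return "Logging in...";
        default:
            return result.ToString();
    }
}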

And although this is viable, it creates a disconnect between the enum and its actual string literal definition. In addition, for each new enumeration value, that giant switch statement needs to be updated with a new string value. If only there were a clean way to map an enum to a string value automatically… and there is! With clever usage of attributes, reflection, and extension methods, one can do something like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Linq.Expressions;

namespace EnumString
{
    [AttributeUsage(AttributeTargets.Field)]
    class EnumStringAttribute : Attribute
    {
        string myValue;
        public EnumStringAttribute(string value)
        {
            myValue = value;
        }

        public override string ToString()
        {
            return myValue.ToString();
        }
    }

    static class ExtensionMethods
    {
        public static string ToUserString(this Enum enumeration)
        {
            var type = enumeration.GetType();
            var field = type.GetField(enumeration.ToString());
            var enumString = (from attribute in field.GetCustomAttributes(true)
                              where attribute is EnumStringAttribute
                              select attribute).FirstOrDefault();
            if (enumString != null)
                return enumString.ToString();
            return enumeration.ToString();
        }
    }

    enum AuthenticationResult
    {
        [EnumString("This username is not registered.")]
        NotRegistered,
        [EnumString("Incorrect password.")]
        BadPassword,
        [EnumString("Logging in...")]
        Success,
    }

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(AuthenticationResult.BadPassword.ToUserString());
            Console.WriteLine(AuthenticationResult.NotRegistered.ToUserString());
            Console.WriteLine(AuthenticationResult.Success.ToUserString());
        }
    }
}

Declaring string values can simply be done inline, and the usage is simple as well! Just call the new extension method to get your user friendly string!

(Of course, for localization, you may want to map the enum to a string resource rather than a hard coded string value; but a similar approach can be used.)
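
As a rough sketch of that localized variant (assuming a Visual Studio generated resource class named Properties.Resources; any ResourceManager would do):

[AttributeUsage(AttributeTargets.Field)]
class EnumResourceAttribute : Attribute
{
    string myKey;
    public EnumResourceAttribute(string key)
    {
        myKey = key;
    }

    public override string ToString()
    {
        // Resolve the key against the localized string table instead of
        // hard coding the English text on the enum field.
        return Properties.Resources.ResourceManager.GetString(myKey);
    }
}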

Language Mavens, Tool Mavens

[Image: mvn]

Recently, a fellow coworker referred me to an article by Oliver Steele about Language Mavens and Tool Mavens. The synopsis is that there are two types of developers:

  • Language Mavens are those that leverage the features of the best language to complete the task at hand; the IDEs/tools used do not matter so much.
  • And conversely, Tool Mavens leverage the power of mature, full featured development tools to complete their tasks: an integrated editor, debugger, code refactoring, etc. The language itself does not matter, because it is just the same old set of classes and methods with slightly different names.

Naturally, this led to a bit of introspection, which resulted in the diagram that you can see above: I think I am somewhat between both camps, in that I tend to choose and use a variety of languages on a frequent basis to get my work done. But I find myself gravitating more toward the languages with the richer tools, because they suit my programming habits better.

Tools

The tool I require a language to have before I even consider touching it is an integrated debugger. Well, to qualify that statement, I require that for applications with complex execution paths and sustained runtimes. That means for bash, Ruby, Perl, and JavaScript, it doesn’t matter, because printf debugging will suffice. [1] However, if you follow this blog, you may have read about my work on porting Mono to Android. Although Mono has a great integrated x86_64 debugger, none exists for the ARM platform. Doing development using Mono on Android is not really a viable option to me until the debugger exists (or I get around to implementing it myself).

The other tool I prefer a language have is auto complete (aka content assist). Its purpose is twofold:

  1. When learning a new language/API/platform, content assist severely lowers the learning curve. Instead of hunting down method signatures, class names, and descriptions on Google, it is built into my editor. This ultimately saves me a significant amount of time.
  2. Eventually, when I master the language, auto complete serves a different purpose: it reduces the amount I need to type by a little under 50%. [0] This is interesting, because a language without code completion would need to complete a task in half the lines of a language that does support it before it actually becomes a more efficient use of my time.

Languages

As Steele’s article states, C# and Visual Studio, as with everything from Microsoft, are an exception to the rule. The new languages (and features) are released at the same time as the tools. And generally, I find myself getting more excited about the language aspects than the tools. For example, with VS 2010 (which should be in beta very shortly, I hear), we will get: F#, the Dynamic Language Runtime (Iron*!), and full expression tree support in C# (to support the DLR and the new dynamic keyword). [2]

Lately, in various projects, I find myself wanting to do more in the way of code generation, be it generating code from a model or modifying/creating new code at run time. That is what spurred my desire to learn Ruby and Python (of which I currently favor Ruby). Though I don’t think this is something I can’t live without.

As for language features that are to-die-for, I find it hard to cope without closures and generics, of which Java has neither. [3] The lack of generics is why I also did not really adopt C# over C++ (templates) until version 2.

 

Where do you fall as a language/tool maven, and why?


[0] I did this by recording my keystroke count using a free Windows application, and disregarded keystrokes for any navigation or correction keys: backspace, delete, and arrows. Then I compared the results versus the character count of the code written.

[1] Incidentally, I realized I have learned around 5 new languages in the past year. Though I can only claim proficiency in one, the language with the best tools: Java.

[2] Using expression trees in C# to Frankenstein together some new code at runtime seems like there could be infinite Lisp-ish possibilities. Though you can dynamically create new code at runtime using Ruby/JavaScript, you can’t modify or access the expression trees themselves like Lisp. C# doesn’t allow you to look at any expression tree at runtime either, but it does provide language integrated support for building expression trees (and has in a primitive form since C# 3), which can have a similar effect.

[3] Type erasure is not what I would call a real implementation of generics. Anonymous inner classes are not closures.

Gitweb Line Highlighting

[Image: codehighlight]

I made a few more changes today to support line highlighting of gitweb blobs. Like SyntaxHighlighter, the line highlighting is also completely JavaScript driven. When you highlight a line, it updates the fragment portion of the URL in your browser. Then when you send the URL (with the fragment) to someone, they will see the same highlighted code.

Click here for a sample of code highlighting.

Fun exercise in my quest to catch up and learn JavaScript, CSS, et al.

I couldn’t do this by modifying the query string, because that causes a browser reload when changing window.location. I could perhaps have made a “create hyperlink” button that puts the hyperlink with a modified query string into your paste buffer, but that isn’t as easy as just copying the address out of your browser address bar.

Gitweb Support for SyntaxHighlighter

After getting Gitosis set up on my Hackintosh, I set up gitweb as well. But vanilla gitweb is really ugly; it’s almost as bad as sitting at a console typing the git commands manually. So I spent a good day or so trying to tweak gitweb to work with SyntaxHighlighter. For the longest time, SyntaxHighlighter simply would not work on any page at all. After prodding for a while, I finally figured out that it was due to gitweb.cgi returning the content-type as “application/xhtml+xml” instead of “text/html”.

Click here to see a sample gitweb repository with SyntaxHighlighter enabled. Navigate around the projects, and click any of the language specific blob links (.c, .cs, etc) to see the new highlighting.

// This is SyntaxHighlighter, and
// it now works in gitweb!
if (true)
{
Console.WriteLine("Hooray!");
}

Head over to my GitHub repository to get my git fork with the tweaked gitweb and instructions!

P.S. This was my first time hacking at Perl. I feel violated.

P.P.S. The Perl brush is intentionally disabled. It is a little buggy.

Setting up a Gitosis Server on OS X

I currently run a Team Foundation Server as my source control management for my Windows based projects, but I found it nigh unusable when doing anything related to Linux and Mac development, as the only way to check in is to have a shared directory on the Mac/Linux system that Visual Studio maps to a project in TFS. Rather clumsy.

Since I started working on Android, I slowly became a git fanboy. The GUI tools are still unbelievably primitive (as is the case with Linux in general, it seems…), but the speed, flexibility, and ease of use of the command line tools and the SCM itself more than make up for that aspect. When I set up a Hackintosh last week, I also purged my computers of all Linux VMs, and I forgot to take into account that I might want one running for dedicated SCM and other Linux-ish work. But, I figured, I might as well try installing gitosis and see if it works: gitosis is just a Python script that runs in an SSH session. And, it turns out it does!

These steps already assume you have git and Python installed by way of MacPorts.

As I mentioned before, gitosis is actually run in a restricted SSH session to provide the repository access, and Remote Login is Apple’s implementation of an SSH daemon/server. So, the first step is to make sure you have Remote Login and Remote Management enabled on OS X:

[Image: RemoteLogin]

I prefer to keep my git repository in a case sensitive partition/image. I recommend you do the same, as there may be issues otherwise. But, OS X can’t be installed on a case sensitive partition, so you might need to repartition your hard drive to include a case sensitive partition:

[Image: WDC WD3000GLFS-01F8U0 Media]

Now, let’s add a “git” account that will own the repository and provide repository access via SSH. This can be a standard user account. You should not run gitosis under your own account, because gitosis restricts shell access for the account it runs under, limiting SSH sessions to gitosis-serve. (You should also verify that the git account has remote login access in the Sharing section, although it should be available by default.)

[Image: Accounts-2]

Now, let’s set up gitosis. The steps are very similar to the ones found at the site I referenced. Let’s drop to a Terminal. If you do not have an SSH key yet, generate one now with ssh-keygen.

ssh-keygen

Download and install gitosis:

mkdir ~/src
cd ~/src
git clone git://eagain.net/gitosis.git
cd gitosis
sudo python setup.py install

By default, gitosis creates repositories in the ~/repositories directory of the “git” user account. And OS X by default places user HOME directories on the case-insensitive partition. Let’s create the repository directory on the case sensitive partition, and create a ~/repositories symlink to it:

sudo mkdir /Volumes/PartitionName/repositories
sudo ln -s /Volumes/PartitionName/repositories /Users/git/repositories
sudo chown git /Volumes/PartitionName/repositories
sudo chown git /Users/git/repositories

Next we need to make sure the git user has the proper directories in its PATH variable (or you may get a gitosis-serve not found error). We add them by modifying the ~/.bashrc file located at /Users/git/.bashrc:

vi /Users/git/.bashrc

Insert the following into the file:

PATH=/opt/local/bin:/usr/local/bin:$PATH

And make the owner “git”:

sudo chown git /Users/git/.bashrc

The last step is to initialize gitosis and give yourself access to it by adding your public key:

sudo -H -u git gitosis-init < ~/.ssh/id_rsa.pub

This should give you a working gitosis installation! Administration of the git repository is done by checking out the gitosis-admin repository from git, making changes to the config, and checking back in. That is incestuously cool! So, if you can check out the gitosis-admin directory, that means everything is working great:

git clone git@your-host-name:gitosis-admin.git

 

Hopefully that worked. If so, you can begin editing the gitosis.conf file to create more repositories and add users. More information about how to set up repositories and access can be found here.

Building a Hackintosh from A to Z

happy-mac-icon

Before we resume this tutorial, let’s take a moment of silence in memory of PirateBay.
*Half Second Pause*

Continuing On – Boot 132 Installation

One of the weird things about the tools available to get OS X running properly on x86 hardware is that they are all written for OS X. A bit of a chicken-and-egg problem: how exactly do you use those tools if you can’t install in the first place? This was also one of the first oddities I encountered in the primary guide I used for my installation: the guide calls for you to install a hacked installation disc of OS X prior to doing a Boot-132 installation. And in retrospect, it makes sense: the hacked discs are easy to install and are just used as a bootstrap/failsafe/utility in case your actual Boot-132 installation goes awry. The reason I am bringing this up is so that the prerequisites to the installation make more sense.

Step 0: Prerequisites

  • Two hard drives – It must be two hard drives, not two partitions. The installation will involve installing both the hacked install CD as well as a Boot-132 installation. Boot-132 involves creating and tweaking the aforementioned modified EFI boot partition. So if something goes wrong with the boot partition, using two partitions to do the installation would just result in two busted installations. So, that is why you should have two hard drives for this process; one hard drive will contain a backup installation and boot partition. Use the slower/cheaper/crappier hard drive for your hacked CD installation, and the good one as the target of the Boot-132 installation.
  • Patience – You will need a lot of this later on in this guide.

Step 1: Hardware Setup

Make sure both of your hard drives are connected to your PC. Have your crappier hard drive as the second one in the boot order. Remember that there will be two OS X installations, and we are starting out by installing the hacked installation, which goes on the crappier drive. We don’t want to boot into this one automatically, because once everything is said and done, this will not be the primary installation.

Step 2: Installing iDeneb v1.4 (OS X 10.5.6) [3]

Here is the guide I used to do my iDeneb installation. Much of my information came from here. I recommend also reading through that as it comes with pretty helpful pictures and such. But be mindful to follow the instructions regarding partitioning on this section of the blog, as the final setup will be different from the linked guide.

Download iDeneb 1.4 torrent off of PirateBay (if it is still around) and make a CD from the ISO.

Boot off the iDeneb CD. Hopefully you will not see the “waiting for root device” problem during boot that some people get. I personally ran into it on one installation. The issue resolution varies from computer to computer. For me, I resolved it after much hair pulling by hooking up my CD-ROM to the SATA-0 connector.

Once iDeneb is loaded, keep hitting Continue/Agree until you get to the point where you choose an installation disk. From the menu, choose Disk Utility. Verify both of your hard drives are visible in the list. Partition both drives with the following:

GUID Partition Table (this can be found under Options). The first/installation partition of each drive should be Mac OS Extended (Journaled). [0]

Name the crappy partition “Recovery”. Name the good partition that will have your retail installation “System”. Well, you can name them what you want, but that is what I called them and how I will refer to them in this guide.

After partitioning both disks, choose the crappier disk as the installation target, and continue. You should now see an OS X disk with an arrow pointing to a hard drive. Don’t press Continue yet! First press Customize. At this step, you need to choose the .kexts that you will add in as part of the installation process. Make sure you choose the appropriate ones for your motherboard, video card, and network card. Don’t worry about sound and any other random peripherals; it doesn’t matter if those do not work. Keep in mind that this step is just to get a booting OS X installation, preferably with working internet access (if not, a USB drive to transfer stuff over from another computer works too).

My Personal Computer Setup [1]:

  • Core i7 920 Processor
  • Gigabyte EX58-UD3R Motherboard
  • NVidia 9800GT Video Card
  • 6GB RAM
  • 2 SATA Hard Drives
  • 1 SATA CD-ROM

In my case, I chose the following from Customize:

  • JMicron ATA (motherboard)
  • Realtek R1000 (network)
  • NVInject 512MB (video)
  • Voodoo Kernel
  • Fonts
  • Applications

Once you have chosen the proper setup for your machine, press Continue to finish up the installation. At the end of this, you should have a working OS X installation that will be used to bootstrap the actual installation. Do not ever run the Apple Update software on this installation. It will break it.

Step 3: Installing from the Mac OS X Retail Disc

As I mentioned in the previous post, you must have a retail disc. You probably can’t install off an OEM disc because that is a stripped down version of the installation tailored to a MacBook/MacPro.

Insert the disc into your Hackintosh running iDeneb. It might automatically start up the Installation Menu. Close that out. You don’t want to run that as it will try to restart your computer and install, which will fail miserably. Instead we will install retail OS X on a separate disk from under the current running OS X installation. To do this, open Terminal from Applications and type the following [2]:

cd '/Volumes/Mac OS X Install DVD/System/Installation/Packages'

Now type:

open OSInstall.mpkg

The menus should look familiar, as they will be very similar to the previous iDeneb installation. Choose the good hard drive for the retail installation. Don’t worry about formatting/partitioning it, as you already did that during the iDeneb installation. And once again, choose Customize before starting the installation. But this time, you will not be presented with a list of drivers and tweaks for your computer like you were with iDeneb. Instead you will see the list of OS X installation options. I recommend deselecting all the printer drivers (they are all selected by default and total an extra 3GB). This will make the installation go much faster. Now continue on with the installation.

Boot-132 on your EFI partition

You might be tempted to reboot now into your OS X installation on your main hard disk. But don’t! It won’t even boot, because there is no boot loader set up on that disk yet!

Download the EFI Boot Installer. Credits to Wolfienuke for writing this; don’t download the v3 version from his post though: there were several bugs that made the scripts not work properly for me. I ended up patching them, notifying him of the fixes, and then uploading the fixed/linked version to my site.

When you unzip the EFI Boot Installer, there are two things of interest:

  • Extensions – This folder contains the .kexts for your machine. The download currently contains the .kexts I use for my setup. They probably won’t work for you unless you have the exact same hardware. Typically you will want to download the .kexts that work for your hardware.
  • install.command – Run this to set up your EFI boot partition. Don’t do this yet though.

Step 4: Extensions

First, delete or move all the Extensions that I provided. You will probably want to start from scratch, unless you are using my motherboard.

Ok, above I mentioned that installing OS X requires patience. And that is because finding the .kexts that work for you can be very tedious and induce insanity.

Basically, first start off by googling and downloading the .kexts for your motherboard, video card, and network card. Don’t worry about peripherals at first; let’s just get a working retail installation. That’s all the advice I have for you: Google wisely and dig through the search results on the InsanelyMac forums.

Step 5: Using the EFI Boot Installer Tool

Now that you have the proper .kexts, it’s time to set up your boot partition.

  1. Double click the install.command, and you will be asked for your administrator password.
  2. It will ask you which disk your OS X installation is on. Choose your “System” partition.
  3. When prompted whether you are choosing Installing or Updating, always choose Install. There seem to be bugs in the script with the Update, which I did not bother to fix.
  4. When asked if you would like to edit com.apple.boot.plist, once again, always choose yes and select the following:
    • mach_kernel
    • Press enter for boot flags, but you may want to tweak this if you don’t have an i7 920 processor. If you don’t have an i7, simply use “-v”. If you have an i7 better than the 920 or are overclocking, you will need “-v busratio=X” where X is the clock multiplier you are using for your processor. You can look this up in the BIOS.
    • Press enter for timeout.
    • Press enter for EFI String. More on this later.
  5. Press enter (no) for increasing version numbers. I have tried this, and this just caused problems. Not sure what it is for.
  6. Press “y” to continue.
  7. Before rebooting, first peruse through the install.log and see if there are any glaring errors. There generally shouldn’t be.
  8. Now confirm the reboot.

Up until now, you have been choosing your second hard drive from the boot menu. You can now finally choose your first hard drive. If you have the proper .kexts, your retail installation will boot properly.

Step 6: More on Extensions

Hopefully your retail installation is working (somewhat). In which case, you will want to set up the rest of your connected devices. Navigate to your Recovery disk, and find your EFI Boot Installer folder. You will need to go searching for .kexts for your Sound card, power/sleep fixes (maybe), and whatever else you may have. Rerun the EFI Installer in the same fashion after updating your Extensions directory with the new .kexts, and reboot. Keep doing this until you have a working system. :) [4]

If at any time your retail installation stops working, simply reboot into your Recovery disk and undo whatever change you made to the EFI Boot Installer, and rerun it to recover it! (Now you should understand why you need that second install, haha.)

Extra Credit:

EFI Strings

Another method discussed on InsanelyMac for getting OS X to recognize your hardware is the usage of EFI strings. I’m not really sure how it works beyond that. Although my video card (NVidia 9800 GT) worked for the most part with my retail install, dragging windows left some strange transparency-related artifacting. I fixed this by using this guide to find the EFI string for my video card. Once I got the EFI string, I placed it into the efistring.txt found in the EFI Boot Installer and reran the tool.

Using VMWare and Parallels

If you want to run VMWare or Parallels to run Windows, it will probably cause your system to kernel panic, because the custom kernel is located in the EFI boot partition and not at the usual /mach_kernel location of normal installs. I fixed this by copying the custom kernel to the expected location by doing the following:

  • cd /
  • sudo mv mach_kernel mach_kernel.old
  • sudo cp /path/to/efi/installer/Kernels/mach_kernel .
  • sudo chmod 644 mach_kernel
  • sudo chown root mach_kernel

This got Parallels working for me. I haven’t tried VMWare, but that should work for it too. If not, you can also try running:

  • sudo /Library/Application\ Support/VMware\ Fusion/boot.sh –restart

Done!

Well, that’s it. I realize the guide probably won’t give you an exact set of steps that will for-sure get OS X running on your PC hardware. There’s no magical way to get a retail OS X install without some blood, sweat, and tears towards finding your perfect .kext combination. But hopefully the information found in these two posts helped you understand how things work, so you aren’t just poking in the dark, twiddling random settings, praying something works. :)

And, feel free to run Apple Update on your retail install; unlike the hacked installation, it will continue working properly and not break because your critical system files are on a partition that Apple does not touch.

 

 

[0] You cannot install OS X on a case sensitive partition.

[1] The .kext files I will provide at the end of this tutorial will apply to only my computer. Though if you have similar setup, you can probably scavenge some of them.

[2] The blog is munging my quotes. Both quotes are single quotes, the button to the left of Enter.

[3] If you have problems installing iDeneb, you can also try iPC, iAtkos, or Kalyway. One of those is bound to work well enough. You only need it to be able to boot and read your CD-ROM and ideally have network access.

[4] I still have not managed to get sound working on my motherboard, Gigabyte EX58-UD3R. I don’t really care though, as my headphones are hooked up to my laptop anyways.

Have Your Cupcake and Eat It Too


Google released the Android 1.5 SDK today. They also pushed their internal Cupcake branch upstream to the main Android project! I sync’d up my repository and started the build. I ran into an issue with libOmxCore.so not being found, so I patched the build process to pull it off the device. I’m not sure if that is the right solution, but it builds and runs!

The immediate differences I noticed: the UI scheme has changed significantly. It looks considerably better than before. The soft keyboard seems to be out of the prototype stages and is fairly usable as well.

Click here for a zip file containing the Cupcake images. I recommend also downloading the cupcake scripts I wrote up to flash to a phone or run in an emulator.

To flash to a phone using my scripts on a Windows system, extract both zip files into the same directory and simply run while your phone is in fastboot mode [0]:

flash.bat

To run the emulator, you must have the Android SDK:

emulator.bat

The directions are similar for Mac/Linux (flash-mac.sh, flash-linux.sh)

[0] To start up in fastboot mode, you must have a rooted/dev phone and the engineering bootloader. Turn your phone off, hold Camera, and press Power. Then connect your phone to your computer and press the Back button.

Building a Hackintosh from A to Z

I’ve been slacking on this blog. I guess you can say my computing life has been in a state of turmoil recently. A mid-life crisis per se. After much soul searching, I decided to write my first check to the embodiment of all that is evil, the reincarnation of Skeletor, Steve Jobs. For a long standing zealot of the Microsoft Windows camp, this is perhaps the highest form of treason.

[Image: skeletor jobs]

Anyhow, this post isn’t really to talk about why I switched; my Windows machine didn’t go anywhere. It is just sitting in a VM in my Mac now. All said and done, I am now a somewhat proud owner of a MacBook, iPhone, and… a Hackintosh?

Building the Hackintosh has been one of the most frustrating experiences of my life. This was mostly due to the combination of my utter lack of knowledge of the subject matter and the abysmal quality of the guides that are available. Speaking of the OSx86 guides: they are perhaps the most poorly written, ill-explained walkthroughs ever. They are only somewhat useful if you have read every last one, and have been failing at your OS X installation for 4 weeks. Those 4 weeks consist mostly of copying and deleting .kexts and twiddling various settings randomly. Consider it the bar to entry.

So, I made a commitment, that if I ever managed to get OS X working properly on my desktop, I would write a guide that did not suck. So here I am today.

As a precursor, the files in this guide apply specifically to my computer setup. But the general information about the topic should help others understand what is going on, so they aren’t just poking blindly in the dark when trying to get OS X working.

 

Installing OS X: Two Methods

You can install OS X on Pentium 4, AMD, Core2 and i7 processors, with generally no problems (though I would recommend Intel chips for better support). However, there tend to be some issues with chipsets. For the most part, Intel based motherboards seem to work fine. However, non-standard motherboards, in my case the NVidia nForce 790i SLI, may have issues. (There are currently no working SATA drivers for this board.)

Hardware compatibility is obviously the major hurdle for OS X installations. Macs roll off the assembly line, with mostly the same set of hardware from machine to machine. As such, there’s a very limited set of drivers available for the platform. To work around this, hackers have spent significant time porting and implementing drivers to OS X. The annoying part is finding that special combination of .kexts that work for you and your machine.

Putting the driver issue to the side for now, there are two methods to install OS X:

  • Hacked Installation Disc – There are currently four popular hacked/custom OS X discs: Kalyway, iDeneb, iAtkos, and iPC. These discs come prepackaged with a custom installer, sometimes a custom kernel (Voodoo), and a variety of drivers. These installation disc torrents can be found on everyone’s favorite torrent site. Because the discs come with basically everything you need on them, and the hard work done, you can more or less just pop one in and get a running system fairly easily. The downside to this method is that since various system files and drivers are modified, Apple Software Update can easily break this installation, turning your computer into a paperweight until the files are restored. This makes it a very brittle installation.
  • Boot-132 – This is a fairly new installation method, and as such, the tools and guides regarding it are not very well developed or explained. But this method is not as brittle as the aforementioned one. Updates from Apple are very unlikely to break your installation.

Boot-132 Explained

Boot-132 works by storing the modified kernel and kernel extensions (aka .kext files, aka drivers) on a boot disk. This boot disk starts up and then loads OS X under its modified system environment that is compatible with non-Mac hardware. Since the modified environment is contained in the boot loader, this allows users to install OS X directly from the retail CD [0] and without any modification to the installation itself! This boot disk is commonly a CD or a USB drive. [1]

Recently, a technique has been developed that allows users to store the Boot-132 boot loader and system files in a special partition of a GPT format disk.

Partition Tables (and GPT) Explained

When you partition a hard disk, the disk uses a partition table to contain the disk layout/volume information. The primary partition format found on Windows is Master Boot Record. However, Master Boot Record is getting a bit long in the tooth, and the new standard is GUID Partition Table (GPT). This is interesting because GPT partitioned disks must have a 200MB EFI System Partition. This partition contains the disk’s boot loader, is generally hidden from the user, and is currently unused and untouched by most operating systems, as is the case on Mac OS X. This means that system updates will not meddle with and patch your critical system files.

Thus, the EFI partition is the perfect place to store the Boot-132 modified system files.

 

Stay tuned… Building a Hackintosh from L to Z coming soon.

[0] The Boot-132 install must be from a Retail CD. The OEM CDs that ship with MacPro or MacBook are stripped down versions of the OS tailored specifically to that hardware. They will most likely not work if trying to install off of them.

[1] There is a product called EFI-X that claims to magically turn your computer into a Mac by plugging their “BPU” or “Boot Processing Unit” (wow, what a joke) into your USB drive. This, in reality, is simply a USB drive with a Boot-132 loader. Boot-132 is free. A small USB drive goes for around $10. The company sells their BPU to uninformed masses for $340. And it generally doesn’t even work. Thanks to Karma, EFI-X got into some legal hot water and had to shut down their US office.

Continue on to Part 2 of Building a Hackintosh from A to Z.

Mobile Phones – My thoughts on the whole thing (NSFW possibly)

[Image: mythoughts]

Klaxon for Android – Now a Paid Application on the Market

The new improved Klaxon is now available on the Market for a reasonable $1.99! There have been several bug fixes, as well as some new features:

  • The compass is used to detect flip events. This allows vibration to be used for the alarm as well (vibration previously used to mess with the accelerometer, making flip detection unusable).
  • Alarms can now optionally vibrate.
  • Alarms are now nameable.
  • Snooze can now be disabled.
  • Alarm will now turn off if the Home button is pressed.
  • Click and hold an alarm on the main screen to delete it.

Expression Trees, dynamic, and Multithreading in C# 3.0/4.0

I just had a brain fart the other day: the dynamic keyword in C# 4.0 is more than just a way to interact with dynamic languages. It can be used as a mechanism to evaluate any expression tree at runtime. One such example is that any arbitrary data source can be generically fashioned into dynamic objects that developers can interact with at run time. Take for example the following XML document:

<People>
  <Person>
    <Name>Koushik</Name>
    <Age>27</Age>
  </Person>
  <Person>
    <Name>Bobo</Name>
    <Age>12</Age>
  </Person>
</People>

Given a proper dynamic binding implementation for XML, the following pseudo code could be used to interact with this document:

XmlDocument doc = new XmlDocument();
doc.Load("file.xml");
dynamic dynamicDoc = DynamicXmlDocument.From(doc);

foreach (dynamic person in dynamicDoc.Person)
{
    Console.WriteLine("Name: {0}", person.Name);
    // modify the person's age in the document
    person.Age++;
}
doc.Save("file.xml");
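
DynamicXmlDocument above is pseudo code, but to make the idea concrete, here is a minimal sketch (my own, not a real library) of what such a binder could look like in C# 4.0, built on the DLR’s DynamicObject. It handles only simple member get/set, not the collection and increment semantics used above:

using System.Dynamic;
using System.Xml;

class DynamicXmlNode : DynamicObject
{
    readonly XmlNode myNode;
    public DynamicXmlNode(XmlNode node)
    {
        myNode = node;
    }

    // Resolves person.Name by reading the <Name> child element's text.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        XmlNode child = myNode.SelectSingleNode(binder.Name);
        result = child == null ? null : child.InnerText;
        return child != null;
    }

    // Resolves person.Name = "..." by writing the child element's text.
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        XmlNode child = myNode.SelectSingleNode(binder.Name);
        if (child == null)
            return false;
        child.InnerText = value.ToString();
        return true;
    }
}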

This approach can be used to access or modify any potential data source, such as databases, web services, RPC, etc. What would normally be done with code generation tools (like xsd.exe, LINQ to SQL, wsdl.exe, etc) at design time can now be done at runtime. [0]

This train of thought continued until it derailed into another area I find interesting: multithreading (and parallel computing). Consider the following simple program (disregard the [Future] attribute for now, more on that later):

[Future]
static int LongOperation(int result)
{
    Thread.Sleep(1000);
    return result;
}
static void Main(string[] args)
{
    int normalResult = LongOperation(LongOperation(1) + LongOperation(2)) + LongOperation(LongOperation(3) + LongOperation(4));
}

LongOperation is an operation that takes 1 second and returns the value it was passed. It is a pseudo function that simulates an operation that takes a significant amount of time to complete and return a result. Examine the expression above: how long do you think it would take to complete? If you guessed 6 seconds, you guessed correctly: there are six calls at 1 second apiece, and they run one after another.

Those operations are begging to be run asynchronously by way of threads. A friend of mine, Jared Parsons, released his implementation of “Futures” in his BclExtras library, which makes building concurrent operations fairly easy. A Future is an object that wraps a long running operation in a thread, and blocks only when the Value of that operation is necessary. That way, a developer can define several values they will need in the Future, and start computing the results concurrently. For example, to do the above asynchronously using Futures, it would be expressed as follows:

var op1 = Future.Create<int>(() => LongOperation(1));
var op2 = Future.Create<int>(() => LongOperation(2));
var op3 = Future.Create<int>(() => LongOperation(3));
var op4 = Future.Create<int>(() => LongOperation(4));
var outer1 = Future.Create<int>(() => LongOperation(op1.Value + op2.Value));
var outer2 = Future.Create<int>(() => LongOperation(op4.Value + op3.Value));
int futureResult = outer1.Value + outer2.Value;

This implementation, which uses multithreading, will take only 2 seconds to complete (1 second to complete op1-op4 simultaneously, 1 second to compute outer1 and outer2 simultaneously). But it is not very pretty; it is difficult to look at this code and understand what is going on.
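
To make the mechanics concrete, here is a toy illustration of the Future concept (a sketch only; BclExtras’ actual implementation is more sophisticated, e.g. around thread pooling and exception handling):

using System;
using System.Threading;

class Future<T>
{
    readonly Thread myThread;
    T myResult;

    public Future(Func<T> operation)
    {
        // Start computing the result immediately on a background thread.
        myThread = new Thread(() => myResult = operation());
        myThread.Start();
    }

    public T Value
    {
        get
        {
            // Block only if the result isn't ready yet.
            myThread.Join();
            return myResult;
        }
    }
}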

And although this is easy enough to write, I had the thought that migrating code towards a concurrent model can be done really easily with “dynamic” types in C#, by implementing a dynamic runtime to handle futures. Consider the following pseudo code:

public class Foo
{
    [Future]
    public int LongOperation(int value)
    {
        Thread.Sleep(1000);
        return value;
    }
}

dynamic future = new FutureObject(new Foo());
int result = future.LongOperation(future.LongOperation(1) + future.LongOperation(2)) + future.LongOperation(future.LongOperation(3) + future.LongOperation(4));

This would work because the implementation of the dynamic runtime would create expression trees which handle method calls with the [Future] attribute specially: Instead of calling the method directly, it would create all Future<T> objects up front when the expression is initially evaluated or created. Since I do not have the Visual Studio 2010 CTP, I decided to forego implementing this for the moment (although the implementation of a Future<T> dynamic runtime would provide this nice syntactic support for DLR languages like IronPython and IronRuby).

However, C# 3.0 does have very powerful Expression support that can be used to achieve similar results. To paraphrase from IanG’s blog post about Expressions:

Consider the following code:

Expression<Func<int, bool>> exprLambda = x => (x & 1) == 0;

This takes that Func delegate, and uses it as the type parameter for a generic type called Expression<T>. It then proceeds to initialize it in exactly the same way, so you'd think it was doing much the same thing. But it turns out that the compiler knows about this Expression<T> type, and behaves differently. Rather than compiling the lambda into IL that evaluates the expression, it generates IL that constructs a tree of objects representing the expression.

This was the point at which I went: "what the?.."

To be more explicit about this, here's roughly what that second line compiles into:

ParameterExpression xParam = Expression.Parameter(typeof(int), "x");
Expression<Func<int, bool>> exprLambda = Expression.Lambda<Func<int, bool>>(
    Expression.EQ(
        Expression.BitAnd(xParam, Expression.Constant(1)),
        Expression.Constant(0)),
    xParam);

Basically, given a normal C# expression, we can see the expression tree at runtime! If we walk the expression tree, and modify all the MethodCallExpressions that have the [Future] attribute, we can achieve similar results to the previous solution:

// Notice that I am not changing the contained expression below at all! It is the same as the normal code.
// The futures are created automatically.
int futureResult = FutureExpression.Process<int>(() =>
    LongOperation(LongOperation(1) + LongOperation(2)) + LongOperation(LongOperation(3) + LongOperation(4))
);

Remember, the contents of the expression with LongOperation are not actually being executed; only an expression tree for that expression is being created. It is not analyzed until FutureExpression.Process is called with that expression.
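
A fragment of what the [Future] detection inside that tree walk might look like (the names here are mine; the complete analyzer is in the downloadable sample at the end of this post):

using System;
using System.Linq.Expressions;

[AttributeUsage(AttributeTargets.Method)]
class FutureAttribute : Attribute { }

static class FutureExpressionHelpers
{
    // True if this node is a call to a method tagged with [Future];
    // only those calls get rewritten into Future<T> creations.
    public static bool IsFutureCall(Expression expression)
    {
        var call = expression as MethodCallExpression;
        return call != null &&
            call.Method.GetCustomAttributes(typeof(FutureAttribute), false).Length > 0;
    }
}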

This is pretty awesome! Syntactic support for automatic multithreading with little to no code changes:

  • Tag any long running methods with [Future]
  • Wrap your original expression in a FutureExpression.Process

But there is still an issue with Expressions, in that they are somewhat limited. Expressions that are parseable at compile time must be single statements. For example, the following would not compile:

// does not compile: A lambda expression with a statement body cannot be converted to an expression tree
Expression<Func<int>> expr = () => { if (someValue == 2) return 0; return 3; };

However, it can be rewritten as the following to make it compile:

Expression<Func<int>> expr2 = () => someValue == 2 ? 0 : 3;

 

It is possible to alleviate some of this problem, albeit somewhat sloppily:

  • FutureExpression.Process<T> can return a Future<T> rather than T.
  • Future<T> can implement all the operators to return further Future<T>s, each of which executes an expression tree of that operator on T (this would fail if T does not support that operator; see the sketch after this list).
  • The result of an operator on two Futures is a Future, so the actual result is never evaluated until the Value is explicitly retrieved.
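
Building on the toy Future<T> sketch from earlier, that operator lifting could look something like this (my own illustration, added inside that class; it requires using System.Linq.Expressions):

public static Future<T> operator +(Future<T> a, Future<T> b)
{
    // Build and compile an expression tree for T + T; Expression.Add throws
    // if T has no + operator, which is the failure case mentioned above.
    var left = Expression.Parameter(typeof(T), "left");
    var right = Expression.Parameter(typeof(T), "right");
    var add = Expression.Lambda<Func<T, T, T>>(
        Expression.Add(left, right), left, right).Compile();

    // The result is itself a Future: nothing is evaluated until someone
    // reads its Value.
    return new Future<T>(() => add(a.Value, b.Value));
}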

Anyhow, the code samples containing the FutureExpression analyzer are available for download. I may resume working on the dynamic runtime for Futures, as that seems rather interesting. But don’t hold your breath for another post. :)

 

[0] Obviously this would be at the cost of runtime performance. Another issue with dynamic objects is that there is no “strong” typing. This can be addressed by duck typing.

Web 2.0 Is Dead. Long Live Desktop 2.0!

WebKit, Mozilla, and Chrome are seeing some phenomenal gains in the arena of JavaScript and HTML rendering standards and performance. If you bust out your magnifying glass, you will find a one-pixel-tall bar on those charts: Internet Explorer, the browser with the largest user share, is trailing the pack. And that is causing thousands of tiny violins to play in unison with the cries of web developers claiming that Microsoft is holding back Web 2.0. And that may be true; that is, if Web 2.0 will have HTML, JavaScript, AJAX, CSS, et al championing this so-called revolution. (These technologies will herein be referred to as just HTML/JS, for brevity.)

One of the prominent characteristics of Web 2.0 applications is that many of them are mimicking traditional desktop applications while using the browser as the application platform. The primary benefit to this is that the application is then cross platform, and that your data is available anywhere. A platform agnostic application runtime, eh? That sounds familiar. Essentially, a primary goal of Web 2.0 is to bring web applications to the desktop.

But HTML/JS was not intended to behave or be used in the manner it is heading toward. Every 10 or so years, there have been fundamental changes in application development, and it is beginning to show its age. To paraphrase the past and my vision of the future of development:

1975 – Imperative Programming fundamentals: C, Pascal, Basic

1985 – Object Oriented: C++, Ada

1995 – Rapid Application Development: Java, Visual Basic, Delphi (HTML and JavaScript are born)

2005 – Metaprogramming/Reflection, XML integrated development (LINQ to XML, XAML, XUL): Ruby, .Net, Python

2005 and beyond – Language Agnostic, Platform Agnostic, Web based distribution, Ubiquitous data access

HTML/JS, initially a static markup driven platform, is trying to morph into a desktop application platform tied to the web: the current/next application development platform. I think it’s time to put that horse down. And apparently so do Microsoft (Silverlight) and Adobe (Air). Rather than try to bring the web to the desktop, they’re approaching it from another angle: bring the desktop to the web. I will dub this movement Desktop 2.0. [0]

 

My blatant ageism aside, what is wrong with HTML/JS?

1) Everything you write is automatically Open Source

Your offline enabled application will have the entirety of its HTML/JS source code available. Thousands of man hours of a company’s IP can be easily inspected and reproduced. And though this concept would probably make GNU Richard Stallman shit his diapers with glee, 99.9% of the time, open source models are not profitable. Money makes the world go around.

2) Performance Blows

JavaScript performance will always and forever suck compared to languages that support static typing.

3) No Developer Freedom

“If you took every bad thing about C and every bad thing about Visual Basic and added in a few more bad things, it would be almost as bad as JavaScript.” Developers should be able to do application and web development in any language of their choice. Being restricted to an eternity of JavaScript development is sort of how I imagine the ninth circle of hell. [1]

4) HTML has no Sex Appeal

It’s just too hard to make HTML sexy. Don’t even get me started on animating said HTML. And yes, I know how awesome Canvas is supposed to be. Following that to its logical end, web pages will eventually just become an HTML shell hosting a single canvas:

<html>
  <head>
    <script type="text/javascript">
      function drawStuffOntoCanvas()
      {
      }
    </script>
  </head>
  <body>
    <div>
      <canvas id="the_canvas"></canvas>
    </div>
  </body>
</html>

How is that any different from HTML hosting a single Silverlight or Flash object?

<html>
  <body>
    <div id="SilverlightHost">
      <script type="text/javascript">
        createSilverlight("/pathToSilverlight/Page.xaml", "SilverlightHost", "Silverlight Application");
      </script>
    </div>
  </body>
</html>

Well, for starters, Canvas is not markup driven like XAML/MXML. Why is HTML 5 taking a step backwards in UI development? Canvas is a flimsy JavaScript Band-Aid on the bullet wound that is HTML. [2]

 

To summarize, I can’t imagine any desktop applications of real significance being created with the existing web technologies. Imagine the best attempt at creating a Web 2.0 version of Microsoft Office in HTML/JS. Actually, you don’t need to imagine. Just visit Google Docs. Does anyone actually use that to do real work? The applications themselves are extremely crippled in comparison to their desktop counterparts. The only value it really provides over Microsoft Office is ubiquitous data access. And sadly, I’d rather use SharePoint, which is no spring chicken either. [3]

Furthermore, as the line between the Web and the Desktop is blurred, you will see developers from one camp bleeding into the other. They’ll be presented with two technologies that are trying to achieve the same thing. Learning HTML/JS allows developers to build great web sites, but clumsy desktop applications. Learning a “Desktop 2.0” technology will allow that developer to build better web sites as well as great desktop applications. What do you think they’ll choose?

 

[0] Crap, it looks like someone else beat me to it. With a similar article title and thesis to boot. WTF!

[1] Why is it called JavaScript anyways? About the only thing Java and JavaScript have in common anymore is how much they suck.

[2] Someday, someone, somewhere will invent a XML markup that can be converted at runtime into Canvas JavaScript which is then eval’d and consumed by web pages. Files that contain the Canvas markup will have the “.cnvs” file extension. Usage of CNVS will become widespread and ECMA will standardize this format. Soon everyone will write their web pages in entirely CNVS, and browsers will forego the necessity of the HTML host. HTML slowly withers away. </jest>

[3] Live Mesh would also allow you to access your documents from anywhere, seamlessly. I actually prefer that to SharePoint. It’s pretty awesome, truth be told.

Push Notification to Mobile Devices via SMS – Part 2

To implement push, let’s start by sending the SMS that will be the notification that the mobile device will receive. To do that, we will use the free SMS service from ZeepMobile. Doing a search for “ZeepMobile C#” results in a match to AboutDev’s Weblog. He has already done all the heavy lifting of writing a managed wrapper around ZeepMobile’s SMS API. I rearranged the code slightly so it is easily reused.

Here’s the code sample from my ZeepMobile managed API:

ZeepMobileClient myClient = new ZeepMobileClient(API_KEY, SECRET_ACCESS_KEY);

private void mySendButton_Click(object sender, EventArgs e)
{
    if (string.IsNullOrEmpty(myUser.Text) || string.IsNullOrEmpty(myMessage.Text))
    {
        MessageBox.Show("Please enter a user and message");
        return;
    }
    myClient.SendSMS(myUser.Text, myMessage.Text);
}

Ok! Now, let’s hook up the SMS notification to my blog. The iframe below is hooked up to an ASP.NET application that uses the ZeepMobile managed API to subscribe a user and allow them to send SMS to themselves. First enter a user name to register on this blog, then follow the steps that show up to allow this site to send you SMS through ZeepMobile:

You should be able to use this site to send SMS to yourself. Next step, intercept the SMS on your phone and handle it. More to come in a later post!

Click here for the code used in this tutorial. The code sample includes a standalone Windows Application that allows you to send SMS to your phone (once you get a free API key with ZeepMobile).

“Push” Notification to Mobile Devices via SMS

One of the questions I seem to get on a frequent basis is “How is push notification implemented?”. Push notification is extremely useful for mobile devices; whenever there is a power constraint, typical polling/pull approaches will tax the device’s battery. Some examples of Push vs. Pull:

Push

  1. ActiveSync
  2. Gmail on Android

Pull

  1. IMAP/POP3 email (user configurable polling interval)
  2. Twitter Clients (which check Twitter periodically)

The obvious problem with polling cycles and pull is that the phone needs to come out of sleep/power save periodically and poll the server for messages: and there may be none waiting! For example, if you set a 5 minute email check interval on your Windows Mobile phone’s IMAP/POP3 account, the radio powers up 288 times a day, often for nothing, and your battery will be dead before the end of the day.

As far as I know, there are two ways to implement Push notification to a mobile device:

  1. Persistent TCP/IP Connection: Ideally the server would initiate the connection to the phone, but most phones do not have a static IP address available to them. Thus, generally, the phone initiates and maintains the connection. I know that actually maintaining a connection indefinitely will also drain the phone’s battery quite quickly due to a constant stream of keep-alive pulses. I’m a little hazy on the details here, but I have read that ActiveSync actually utilizes connections with a 15 to 30 minute time out to get around this problem.
  2. SMS: This is the ideal way to implement push, as the client only gets notified via SMS when there is something new on the server. Beware potential SMS costs that may be incurred.

 

Setting up SMS Interceptors

So, how does a developer use SMS to implement a custom push solution? Well, first, the phone needs to set up an SMS interceptor: you don’t want to be sending protocol related SMS to the user’s Inbox!

Android: Register a receiver that watches for the android.provider.Telephony.SMS_RECEIVED broadcast.

Windows Mobile: Set up an Application Launcher for SMS events.

 

By way of SMS message prefixes, a developer can pick out the specific SMS that are meant to be handled by the application. For example, a sample SMS in my custom protocol could be:

Koush: Poll!

By filtering out SMS that begin with “Koush:” as they arrive, my application can consume them to implement push notification without the user ever seeing them.
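
On Windows Mobile, here’s a minimal sketch of what that prefix filter might look like, using the managed MessageInterceptor API from Microsoft.WindowsMobile.PocketOutlook (WM 5.0 and later); the surrounding class is my own scaffolding, and real code would want error handling:

using Microsoft.WindowsMobile.PocketOutlook;
using Microsoft.WindowsMobile.PocketOutlook.MessageInterception;

public class PushSmsListener
{
    MessageInterceptor myInterceptor;

    public void Start()
    {
        // NotifyAndDelete raises our event and keeps the SMS out of the user's Inbox
        myInterceptor = new MessageInterceptor(InterceptionAction.NotifyAndDelete);
        // only intercept messages whose body begins with our protocol prefix
        myInterceptor.MessageCondition = new MessageCondition(
            MessageProperty.Body,
            MessagePropertyComparisonType.StartsWith,
            "Koush:");
        myInterceptor.MessageReceived += myInterceptor_MessageReceived;
    }

    void myInterceptor_MessageReceived(object sender, MessageInterceptorEventArgs e)
    {
        SmsMessage sms = e.Message as SmsMessage;
        if (sms == null)
            return;
        // "Koush: Poll!" just arrived; kick off a sync with the server here
    }
}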

 

Sending an SMS to a Phone

There are several web services that allow you to send an SMS to a phone. Some free, some not. In the free category, I would recommend trying ZeepMobile. It’s easy to set up, and the API is straightforward.

 

Code sample and demo coming in a later post!

Samsung Mobile Innovator: A Step in the Right Direction

Windows Mobile developers that have used the Windows Mobile Unified Sensor API may know that a painful reverse engineering process took place to add support for Samsung and HTC phones. This is because no phone manufacturer has released the internal API specifications for their device sensors!

However, Samsung recently released the SDK for their Light Sensor and Accelerometer on their new Samsung Mobile Innovator website. I don’t have a Samsung phone available at the moment, but the SDK looks pretty straightforward! And although it would have been ideal if Samsung had officially supported the Sensor API, this is a step in the right direction! A published API is better than no API. Hopefully HTC follows suit!

Here’s a snapshot of the managed bindings that were included with some of the SDK sample code:

namespace SamsungMobileSdk
{
    public class Accelerometer
    {
        public struct Vector
        {
            public float x;
            public float y;
            public float z;
        }

        public struct Capabilities
        {
            public uint callbackPeriod; // in milliseconds
        }

        public delegate void EventHandler(Vector v);

        [DllImport(Shared.SamsungMobileSDKDllName, EntryPoint = "SmiAccelerometerGetVector")]
        public static extern SmiResultCode GetVector(ref Vector accel);

        [DllImport(Shared.SamsungMobileSDKDllName, EntryPoint = "SmiAccelerometerGetCapabilities")]
        public static extern SmiResultCode GetCapabilities(ref Capabilities cap);

        [DllImport(Shared.SamsungMobileSDKDllName, EntryPoint = "SmiAccelerometerRegisterHandler")]
        public static extern SmiResultCode RegisterHandler(uint period, EventHandler handler);

        [DllImport(Shared.SamsungMobileSDKDllName, EntryPoint = "SmiAccelerometerUnregisterHandler")]
        public static extern SmiResultCode UnregisterHandler();
    }
}
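
As a rough sketch, consuming these bindings might look like the following. This is my own guesswork layered on the snapshot above: the SmiResultCode return values are ignored (the enum's members aren't shown), and the delegate is kept in a field so the garbage collector can't collect it out from under the native callback:

class AccelerometerDemo
{
    // hold a reference to the delegate; if it were collected, the native
    // side would be left invoking a dangling function pointer
    static SamsungMobileSdk.Accelerometer.EventHandler myHandler;

    public static void Start()
    {
        SamsungMobileSdk.Accelerometer.Capabilities caps =
            new SamsungMobileSdk.Accelerometer.Capabilities();
        SamsungMobileSdk.Accelerometer.GetCapabilities(ref caps);

        myHandler = OnAccelerometerChanged;
        // register at the fastest period the hardware reports
        SamsungMobileSdk.Accelerometer.RegisterHandler(caps.callbackPeriod, myHandler);
    }

    static void OnAccelerometerChanged(SamsungMobileSdk.Accelerometer.Vector v)
    {
        System.Console.WriteLine("x: {0} y: {1} z: {2}", v.x, v.y, v.z);
    }

    public static void Stop()
    {
        SamsungMobileSdk.Accelerometer.UnregisterHandler();
        myHandler = null;
    }
}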

I’ll be adding support for the official Samsung API to the Sensor API shortly.

Back from Hawaii!

The deafening silence for the past week on my blog was due to my being in Maui! It was a great trip overall.

High point: I swam with a 4 foot long sea turtle.

Low point: I let someone convince me that the Road To Hana was a good idea. I am pretty sure that the Road To Hana is actually the largest man-made fractal. Here’s my artistic rendition:

hana

The scenery and such is unbelievably beautiful, but not even that can offset the nausea that sets in after the 300th hairpin turn on the highway. Here’s an actual satellite image of a typical 1 mile stretch of road along Hana (tile this image 40 times for an accurate representation):

hanareal

Learn Something New Every Day...

Recently I started jotting down anything new that made me say "oh cool!". Some are rarely used C# language features, some are API mechanics, and some are just random trivia. Here's what I've come up with so far:

C#'s stackalloc Keyword

stackalloc is basically a handy way to allocate an array on the stack, and it can provide several conveniences.

Instead of allocating a locally scoped array (which is backed by the heap and managed by the garbage collector), you can get better "bare metal" performance by using stackalloc:

float SomeFunction(float[] input)
{
    float dataResults = 0;
    float[] data = new float[3];
    // we are calculating a result from input and storing interim results in data
    return dataResults;
}

At the end of this method, a float array is sitting around waiting to be garbage collected. Instead, that function can become:

unsafe float SomeFunction(float[] input)
{
    float dataResults = 0;
    float* data = stackalloc float[3];
    // do something with the data and input
    return dataResults;
}

In addition to taking the garbage collector and heap allocation out of the equation to eke out a little more performance, you also get a native float pointer. This can make interop with unmanaged code quite friendly, rather than the giant mess of fixed statements that would result from using float arrays.
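
To illustrate the difference, here's a hedged sketch; NativeTransform and its DLL are hypothetical names invented for the example:

using System.Runtime.InteropServices;

class NativeInterop
{
    // hypothetical native function; the name and DLL are made up for illustration
    [DllImport("somenativelib.dll")]
    static extern unsafe void NativeTransform(float* data, int count);

    unsafe void WithManagedArray(float[] input)
    {
        // a managed array must be pinned so the GC can't move it mid-call
        fixed (float* p = input)
        {
            NativeTransform(p, input.Length);
        }
    }

    unsafe void WithStackalloc()
    {
        // stack memory is invisible to the GC, so no pinning is needed
        float* data = stackalloc float[3];
        NativeTransform(data, 3);
    }
}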

 

C#'s ?? operator

Consider the following method:

Foo EnsureNotNull(Foo foo)
{
    if (foo != null)
        return foo;
    return new Foo();
}

You can trim that down a bit by using a ternary operator:

Foo EnsureNotNull(Foo foo)
{
    return foo == null ? new Foo() : foo;
}

But why use that when ?? is a binary null coalescing operator? That is, it returns the left operand if it is not null, and the right operand otherwise:

Foo EnsureNotNull(Foo foo)
{
    return foo ?? new Foo();
}
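
As a bonus, ?? chains left to right, so a cascade of fallbacks collapses into a single expression. A quick sketch (the names here are made up):

string GetDisplayName(string userName, string cachedName)
{
    // take the first non-null value, falling back to a default
    return userName ?? cachedName ?? "Anonymous";
}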

 

Hardware Graphics Acceleration and Asynchronous Calls

This is a tip I learned way back in my heyday of amateur game development. Many OpenGL/Direct3D calls are just asynchronous requests to the hardware, and subsequent operations may end up waiting for the previous operation to finish. The following is a very common pattern in drawing routines:

void OnPaint()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // paint a bunch of stuff
    // glDrawStuff();
    glSwapBuffers();
}

Here, the code is immediately making paint operations following the clear. This results in those operations waiting until the clear has completed before executing. Restructuring the code as follows can squeeze out a little more performance:

void OnPaint()
{
    // paint a bunch of stuff
    // glDrawStuff();
    glSwapBuffers();
    // clear the buffer after the background and foreground are swapped
    // this clear will take place asynchronously and be complete
    // when we start to draw the next frame!
    glClear(GL_COLOR_BUFFER_BIT);
}

I implemented this in GLMaps recently, and saw an FPS increase from 49 to 51. That's a ~4% increase! Using this technique, any static preprocessing/setup can actually just be run at the end of the drawing operations, rather than at the beginning, using the CPU and GPU concurrently between frames to get a net gain in FPS.

 

Handy C# 3.0 Shorthand

This tip didn't actually catch me by surprise, since I read through the C# 3.0 new features. However, old habits die hard, so I often forget to use them. Prior to C# 3.0, the following would be a standard class declaration:

class Foo
{
    int myBar;
    public int Bar
    {
        get
        {
            return myBar;
        }
        set
        {
            myBar = value;
        }
    }

    double myMoo;
    public double Moo
    {
        get
        {
            return myMoo;
        }
        private set
        {
            myMoo = value;
        }
    }

    float myCool;
    public float Cool
    {
        get
        {
            return myCool;
        }
        set
        {
            myCool = value;
        }
    }
}

With C# 3.0's auto-implemented property shorthand, developers can now eliminate much of the redundancy (the backing fields are implicitly defined):

class Foo
{
    public int Bar
    {
        get;
        set;
    }

    public double Moo
    {
        get;
        private set;
    }

    public float Cool
    {
        get;
        set;
    }
}

Also, when declaring an instance of that class, prior to C# 3.0, a common pattern is to set some initial properties:

void MakeFoo()
{
    Foo foo = new Foo();
    foo.Bar = 0;
    foo.Cool = 0;
}

With C# 3.0 object initializers, you get additional shorthand (fewer characters, not fewer lines) and a more aesthetically pleasing syntax:

void MakeFoo()
{
    Foo foo = new Foo()
    {
        Bar = 0,
        Cool = 0
    };
}
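
Object initializers also compose with C# 3.0's collection initializers, so seeding a list of Foos collapses the same way. A quick sketch, using the Foo class above (and assuming the usual System.Collections.Generic using):

void MakeFoos()
{
    List<Foo> foos = new List<Foo>()
    {
        new Foo() { Bar = 0, Cool = 0 },
        new Foo() { Bar = 1, Cool = 1 }
    };
}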

 

That's it for tips for now. Share 'em if you got 'em!

Blacklisted by Google's Geolocation Service

banned

Several months ago, I described how one could use Google Gears to easily add Geolocation and GPS to their Windows Mobile applications. Basically, this is a trick that would work with any embedded browser by adding and scripting a DOM element, or by using my unsavory window.location hack when that is not possible (as is in the case of Pocket IE).

I was recently trying to reuse this code, and I noticed that the Geolocation sample page I had hosted was not working on Windows Mobile anymore; the service seemed to be returning an invalid response. I navigated to that same URL in Google Chrome and Firefox and it worked fine. I then tried navigating to the Gears Geolocation sample in Pocket IE, and that worked as well. I verified that I still had the most up to date code hosted on my domain. Still no luck, as the code had not changed.

So out of curiosity, I hosted the geolocation page on a separate domain and navigated to it within Pocket IE. Lo and behold, it started working. So I constructed this chart in my mind:

                             Pocket IE   Google Chrome   Firefox
code.google.com              Works       Works           Works
koushikdutta.blurryfox.com   Broken!     Works           Works
supersecretdomain.com        Works       Works           Works

My conclusion? Google has blacklisted my domain! And, to be honest, I'm not really sure why. I'm using their sample code on my domain to get geolocation data, but using the results outside of the context of the browser.

Congratulations Google, you have earned the first honorary YouBastard tag on my blog!

 

Edit:

Google Gears EULA. I haven't violated it as far as I can see, but they do reserve the right to cut off access to the service at their whim.

ActiveSync? On GMail? It's More Likely Than You Think.

200px-Snakesonmyplane

Chances are, most Windows Mobile/iPhone users don't have an Exchange Server sitting around for their own personal use. Well, those people are now in luck! Google has released a new service called Google Sync that uses the ActiveSync protocol to deliver your Mail and Contacts from their services to your phone. This also works with Google Apps hosted domains! This is pretty awesome. Embrace, extend, extinguish?

WindowlessControls Tutorial - Part 4: Data Binding (continued)

contactlist

After rereading my previous post about data binding, I don't feel that my sample does justice to the overall flexibility/utility of WindowlessControls. So, I decided to write another sample: a template for a Windows Mobile contact list application.

Let's start out by deciding the basic behavior of the Contact List:

  • The contact list should show the name on the left of the item, with the contact's number below it.
  • The contact's picture will be to the right of the name and number.
  • When the contact is selected, it should expand and show the street address and email address.

Like the previous example, we populate our ItemsControl with the data we want to bind. To get the list of Pocket Outlook contacts, access the handy OutlookSession class:

myItemsControl.ContentPresenter = typeof(ContactPresenter);
myItemsControl.Control = new StackPanel();

OutlookSession session = new OutlookSession();
foreach (Contact contact in session.Contacts.Items)
{
    if (contact.Picture != null)
        myItemsControl.Items.Add(contact);
}

Now, let's set up the IInteractiveContentPresenter that will represent a contact. This ContactPresenter is a bit more complex than the previous Image Gallery example:

public class ContactPresenter : StackPanel, IInteractiveContentPresenter
{
    // selection state rectangle
    WindowlessRectangle myRectangle = new WindowlessRectangle();
    // contact picture
    WindowlessImage myImage = new WindowlessImage();

    // obvious properties
    WindowlessLabel myName = new WindowlessLabel(string.Empty, WindowlessLabel.DefaultBoldFont);
    WindowlessLabel myNumber = new WindowlessLabel();
    WindowlessLabel myAddress = new WindowlessLabel();
    WindowlessLabel myEmail = new WindowlessLabel();
    // this panel will house the email and address, and only be visible when focused
    StackPanel myExtendedInfo = new StackPanel();

    public ContactPresenter()
    {
        OverlayPanel overlay = new OverlayPanel();

        // stretch to fit
        overlay.HorizontalAlignment = WindowlessControls.HorizontalAlignment.Stretch;
        overlay.VerticalAlignment = VerticalAlignment.Stretch;

        // dock the picture to the right and the info to the left
        DockPanel dock = new DockPanel();
        StackPanel left = new StackPanel();
        left.Layout = new DockLayout(new LayoutMeasurement(0, LayoutUnit.Star), DockStyle.Left);
        myImage.Layout = new DockLayout(new LayoutMeasurement(0, LayoutUnit.Star), DockStyle.Right);
        myImage.MaxHeight = 100;
        myImage.MaxWidth = 100;

        dock.Controls.Add(myImage);
        dock.Controls.Add(left);
        // 5 pixel border around the item contents
        dock.Margin = new Thickness(5, 5, 5, 5);

        // make the overlay fit the dock, so as to limit the size of the selection rectangle
        overlay.FitWidthControl = overlay.FitHeightControl = dock;

        // set up the rectangle color and make it fill the region
        myRectangle.Color = SystemColors.Highlight;
        myRectangle.HorizontalAlignment = WindowlessControls.HorizontalAlignment.Stretch;
        myRectangle.VerticalAlignment = VerticalAlignment.Stretch;
        // the rectangle does not paint by default
        myRectangle.PaintSelf = false;

        // add the extended info that is only visible when focused
        StackPanel nameAndNumber = new StackPanel();
        nameAndNumber.Controls.Add(myName);
        nameAndNumber.Controls.Add(myNumber);
        myExtendedInfo.Visible = false;
        myExtendedInfo.Controls.Add(myEmail);
        myExtendedInfo.Controls.Add(myAddress);

        // set up the left side
        left.Controls.Add(nameAndNumber);
        left.Controls.Add(myExtendedInfo);

        // add the foreground and the background selection
        overlay.Controls.Add(myRectangle);
        overlay.Controls.Add(dock);

        // add the item
        Controls.Add(overlay);

        // this is the bottom border
        WindowlessRectangle bottomBorder = new WindowlessRectangle(Int32.MaxValue, 1, Color.Gray);
        Controls.Add(bottomBorder);
    }

    #region IInteractiveStyleControl Members

    public void ApplyFocusedStyle()
    {
        // make the rectangle paint to denote that it is focused
        myRectangle.PaintSelf = true;
        myExtendedInfo.Visible = true;
    }

    public void ApplyClickedStyle()
    {
    }

    #endregion

    #region IContentPresenter Members

    Contact myContact;
    public object Content
    {
        get
        {
            return myContact;
        }
        set
        {
            // populate the various UI elements with the contact
            myContact = value as Contact;
            if (myContact.Picture != null)
                myImage.Bitmap = new StandardBitmap(new Bitmap(myContact.Picture));

            myName.Text = myContact.FileAs;
            if (!string.IsNullOrEmpty(myContact.MobileTelephoneNumber))
                myNumber.Text = myContact.MobileTelephoneNumber;
            else
                myNumber.Text = myContact.HomeTelephoneNumber;

            myAddress.Text = string.Format("{0}\n{1} {2} {3}", myContact.HomeAddressStreet, myContact.HomeAddressCity, myContact.HomeAddressState, myContact.HomeAddressPostalCode);
            myEmail.Text = myContact.Email1Address;
        }
    }

    #endregion
}

And the results of this tutorial can be seen in the video below.

This took me about 30 minutes to whip together; with a bit more time, it could easily be turned into a real Contact List application! Click here for the updated source code to this tutorial.