Saturday, June 9, 2012

Rooting your Droid Incredible

I have an old Droid Incredible that I'm hanging onto until the iPhone 5 comes out. In the meantime, I've been fighting with the "Phone storage space is getting low" issue. Like many people seeing this problem, I have plenty of free space available. I was able to fix it by rooting my phone and running a custom ROM to repartition the phone's storage. Here is how I did it.

Disclaimer

Rooting your phone may void your warranty. Back up your data before doing this. Also know that I haven't been terribly rigorous in understanding everything that is happening here. You could say that I am cargo-culting some of the instructions. For me, since I am due for a phone upgrade, the worst case scenario is that if I brick my phone, I'll head to the Apple store and pick up a 4S.

Overview

Rooting gives you root-level access to the phone, which allows you to run custom ROMs. Once you have this, you can replace everything on the phone with something like CyanogenMod. For the time being, I have chosen to stick with the OS that shipped on the phone, but to boot into a custom ROM to fix the issues I am seeing.

The "Phone storage space is getting low" message appears because application files and some data are stored on a 150MB partition mounted at /data/data. Once your apps or that data hit 140MB, you'll get the error and your phone will shut down sync services, meaning you won't see new emails. I found a ROM that repartitions this space to 750MB and haven't had a problem since (roughly a month).

Gaining Root

I use a Mac at home and used Unrevoked 3 to gain root access. The latest version of the software (3.32) failed to gain root; I got an error message stating that the firmware was too new. I was able to pull down version 3.22 by changing the download URL, and that worked fine. You'll need root access to boot the phone into custom ROMs, and you'll need custom ROMs to do the repartitioning.

Repartitioning

I bought a copy of ROM Manager from the Play Store. This will let you boot custom ROMs. I used it to boot the Convert2Ext4_no_data_limit_normal_dalvik ROM. You can find a description of what the ROM does and other variants of it in this XDA Developers forum post. You will need to move the ROM image to your phone to boot it. I just downloaded it to my Mac, mounted the phone's SD card as a disk and copied it over.

Success

After you run the custom ROM, your phone should now be repartitioned and the low storage space error message should go away. With root, you should be able to do other nice things like remove Verizon bloatware. I haven't tried that yet.

Text Messaging

After rooting and repartitioning the phone, I noticed that I could send text messages, but I couldn't receive them. This seems to be a relatively common problem among people who do this sort of thing. After some Googling, I came across this post in the Verizon community forums. Basically, if you download http://dl3.htc.com/misc/inc8049.apk on your phone and install it, the issue is fixed and you can receive text messages again. I have no idea what the file does, so download at your own risk. The URL points to HTC, so I figured it was relatively safe.

Wednesday, December 29, 2010

Debugging MapReduce in MongoDB

On a project that I am working on, we are doing some pretty intense MapReduce work inside of MongoDB. One of the things we've run up against is the lack of solid debugging tools. Some Googling basically tells you that print() is all you've got.

We've decided to take a different approach and debug our MapReduce code in the browser. Since the code is JavaScript and modern browsers have really excellent support for debugging (breakpoints, variable inspection, etc.) it's pretty easy to do.

All you need is a web app (or even static HTML file) that will:

  1. Load up one of your documents that you would like to map in the browser. Since the documents are JSON, this is easy. In our project, we have JSON fixture files and a small web app that allows you to choose which fixture to use for testing.
  2. Mock the emit() method. You can just have it write to a Hash that you can inspect later.
  3. Load up the Map and Reduce functions. If you keep these in separate .js files, you can pull them in with a simple script tag.
  4. Bind the map function to the document so that it has the correct context. In a MongoDB mapper function "this" is set to the document that you are mapping. You can easily do this with the bind() function in Underscore.js. I'm sure that other JavaScript frameworks provide a similar function.
  5. Put a link on the page that will let you run the bound function.
This emulates the MongoDB MapReduce environment while letting you use the browser's debugging tools.
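The steps above can be sketched in a few lines of JavaScript. The sample document, field names, and map function below are made up for illustration, and I use the native Function.prototype.bind rather than Underscore's _.bind, which works the same way:

```javascript
// A sample document, standing in for one loaded from a JSON fixture.
var doc = { _id: 1, tags: ["mongodb", "debugging"] };

// Mock emit(): collect key/value pairs in a plain object for later inspection.
var emitted = {};
function emit(key, value) {
  if (!emitted[key]) { emitted[key] = []; }
  emitted[key].push(value);
}

// A mapper written MongoDB-style: "this" is the document being mapped.
function mapTags() {
  for (var i = 0; i < this.tags.length; i++) {
    emit(this.tags[i], 1);
  }
}

// Bind the mapper to the document so "this" is correct, then run it.
var boundMap = mapTags.bind(doc);
boundMap();
// emitted now holds { mongodb: [1], debugging: [1] } and can be inspected
// with the browser's console, breakpoints, or variable inspector.
```

Hook boundMap up to a link or button on the page and you can step through the mapper with the browser's debugger.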

Tuesday, December 14, 2010

Using Underscore.js with MongoDB

I've been using MongoDB for a while now and have been really happy with it. I wanted to share something we are doing on one of the projects I work on that makes working with Mongo even better.

MongoDB allows for the use of JavaScript to do lots of work on the server side. This includes running MapReduce jobs on collections, but it can also be used in where clauses and for doing grouping. Being able to use JavaScript for these things is handy, but using just the core JavaScript language can be less than ideal. That's why we prime our MongoDB environment with Underscore.js.

On the Underscore website, it claims to be a JavaScript utility belt. I've found that to be the case. It has functions like any or include that save you the trouble of having to write for loops to iterate over arrays. While the MongoDB documentation describes how you can store individual functions for server side use, it didn't really touch on how you could load an entire library like Underscore.

It turns out you can load up libraries like this pretty easily using db.eval(). I recall reading (but can't currently find the docs to prove this) that every MongoDB connection has a JavaScript context associated with it. If you create functions in this context, they will exist as long as the connection is around. So if you just eval the Underscore.js library before you do any work with your connection, you will have access to all of its functions to do your work.

Here is an example of how to use Underscore.js with the Ruby driver. In this example, I'll set up the MongoDB connection with Underscore.js, create a sample dataset of cars, then use Underscore to group them by make without repeating model.
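The original gist is no longer embedded here, but a sketch along those lines might look like the following. This assumes a locally running mongod, the classic (pre-2.x) mongo Ruby gem, and an underscore.js file on disk; the collection name, sample data, and file path are illustrative stand-ins:

```ruby
# Sketch: prime a MongoDB connection with Underscore.js via db.eval,
# then group cars by make without repeating models. Connection details,
# data, and the underscore.js path are assumptions for illustration.
def cars_by_make(underscore_path = "underscore.js")
  require "mongo" # classic driver API (Mongo::Connection)
  db = Mongo::Connection.new("localhost", 27017).db("test")

  # Load the entire Underscore library into this connection's JS context.
  db.eval(File.read(underscore_path))

  # Create a sample dataset of cars.
  db["cars"].remove
  [{ "make" => "Honda",  "model" => "Civic"  },
   { "make" => "Honda",  "model" => "Accord" },
   { "make" => "Toyota", "model" => "Prius"  }].each { |c| db["cars"].insert(c) }

  # Underscore's functions are now available server side.
  db.eval(<<-JS)
    function() {
      var grouped = {};
      db.cars.find().forEach(function(car) {
        grouped[car.make] = grouped[car.make] || [];
        if (!_.include(grouped[car.make], car.model)) {
          grouped[car.make].push(car.model);
        }
      });
      return grouped;
    }
  JS
end
```

Pretty printing the result with awesome_print should show each make pointing to its list of distinct models.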




The only downside to this approach is that db.eval does not seem to work with sharding. That is OK for me right now, but YMMV. Also note that I am using the awesome_print gem to pretty print the results.

Wednesday, September 15, 2010

Git rm may cause insanity

Ran across this today, and wanted to help others avoid the same fate. If you use git rm to remove the last file in a directory, it will remove the directory as well. If you are in that directory, odd things can happen that will potentially drive you insane.

Let's create a git repository with a folder that has a single file in it:
$ cd /tmp
$ mkdir foo
$ cd foo/
$ git init
Initialized empty Git repository in /private/tmp/foo/.git/
$ mkdir bar
$ cd bar/
$ echo 'hello' > splat.txt
$ git add splat.txt
$ git ci -m 'adding a text file'
[master (root-commit) 069f11b] adding a text file
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 bar/splat.txt

So I now have a git repository and placed the file bar/splat.txt under revision control. Now if I do:
$ git rm splat.txt
rm 'bar/splat.txt'
This will not only remove splat.txt, but it will remove the whole bar directory. I say this will drive you insane, because if you try to move or copy a file into your current directory, you'll get an error that will probably catch you off guard. Like:
$ cp ~/.gitignore .
cp: ./.gitignore: No such file or directory
There is a file called .gitignore in my home directory; it's just that my current directory no longer exists. It took me about five minutes to realize what was going on... and I was starting to wonder if I still knew how to use the cp command.
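The whole thing is easy to reproduce in a scratch repository (the paths below are throwaway, and this assumes git is on your PATH):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir bar
echo 'hello' > bar/splat.txt
git add bar/splat.txt
git -c user.name=test -c user.email=test@example.com commit -qm 'adding a text file'

git rm -q bar/splat.txt               # removes the file...
test ! -d bar && echo 'bar is gone'   # ...and its now-empty parent directory
```

If your shell was sitting inside bar when that ran, every relative-path command afterwards would start failing with "No such file or directory".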

The reason I ran into this is that I was rearranging my .vim folder to use pathogen. I keep all of my dot files under source control and stumbled upon this while clearing out my vim autoload folder.

Monday, May 17, 2010

Gwibber on Ubuntu 10.04 issues with FiOS

I just upgraded one of my machines to the latest and greatest Ubuntu because I plan on taking it on the road later this week. After I got everything set up, I fired up Gwibber, my favorite Twitter client on Linux. Immediately, I started running into problems. I couldn't get Gwibber to load any new tweets. There seem to be several people who are experiencing this issue with Gwibber, but their troubles are related to the language settings. That was not the case for me.

I did some digging by firing up Gwibber in a terminal:


$> gwibber-service -o -d



This is what I got:





Gwibber wasn't refreshing because it was timing out on DNS lookups. I have Verizon FiOS as an ISP, and terrible DNS seems to be a common FiOS problem. I switched over to Google DNS and everything is snappy and working properly. If you're using Gwibber on FiOS and having issues, try this out. It may save you an hour or three.

Gwibber, or whatever library it uses for network communication, picked a pretty short timeout, but the real problem is on Verizon's end. Verizon really needs to step it up here. People will perceive FiOS as slow because it takes forever to look up an IP, even though the network itself is pretty quick in my experience.

Thursday, April 1, 2010

Using xargs with git

Sometimes, when I'm working on a project, I'll create a bunch of new files and realize that I have a ton of untracked stuff that I need to add to my git repository. Since I generally only use git on the command line, it would be painful to copy and paste all of the untracked file names from the output of git status into separate git add commands.

The two commands I have found handy for dealing with this situation are git ls-files and xargs.

If you run the command:
git ls-files -o
It will show you all of the untracked files in your working directory, one file per line. A problem you will run into here is that it also shows files matched by your .gitignore. To get around this, you just need another argument that points at your .gitignore:
git ls-files -o --exclude-per-directory=.gitignore
Now that you have all of the files you want to add, you just need to run git add on all of them. This is where xargs comes in handy. It reads from standard input, splits the input into arguments, and feeds them to another command. Putting it all together, you get:
git ls-files -o --exclude-per-directory=.gitignore | xargs git add
That last command will add any untracked files to your git repository. The beautiful thing here is that we can leverage some UNIX-y goodness as well. Let's say we're working on a project and we only want to commit some XSLT we have been working on. You can do this by throwing grep into the command chain:
git ls-files -o --exclude-per-directory=.gitignore | grep xslt | xargs git add
This will only add files that contain "xslt" in their names. This same approach comes in handy when you remove files from your working copy but forget to run git rm.
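One caveat: by default xargs splits its input on whitespace, so untracked files with spaces in their names will get mangled. Both git ls-files and xargs can speak NUL-delimited lists, which sidesteps that (the scratch repo and file name below are just for demonstration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
touch 'release notes.txt'   # an untracked file with a space in its name

# -z emits NUL-delimited paths and -0 consumes them, so the name survives:
git ls-files -o --exclude-per-directory=.gitignore -z | xargs -0 git add
git ls-files                # the file is now staged/tracked
```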

Monday, November 23, 2009

Classy hData

I've been working on a team that is looking at ways in which we can simplify the exchange of information in Health IT. This effort is called hData. We just released a new version of our packaging and network transport spec, and I would like to talk a bit about how we arrived at this version.

I think it is really important for IT specifications to have a reference implementation available. If you build a spec without code, it's really hard to see where you have gone wrong. To make sure we are on the right track, I built a small web application that implements the spec. I was able to quickly uncover some bugs in our work. Bugs I'm sure we would have missed by just reading the document.

Technology Choices

Since Ruby is my language of choice, it would be natural to think I would want to tackle this project in Rails. However, hData makes good use of the HTTP verbs, and I'm not so sure they would line up seamlessly with Rails conventions. I decided to go with a much simpler choice: Sinatra, a small web framework that seems perfect for this job. It makes the HTTP verbs central to your code, so it should be fairly obvious how to go from the spec to an implementation.

There are a few other tools that I used on this adventure. DataMapper was just right for the ORM needs of the project. I could have used ActiveRecord to persist data, but DataMapper has a really nice auto-migration feature, which will save me from writing all of the database creation code. I also used Bundler to manage my application's dependencies.

Getting Started

The best way to get started here is by taking a test driven approach to the spec. For that I will be using Shoulda and Rack Test. With my TDD tools in place, I can take part of the spec that looks like this:


3.1.2 POST

3.1.2.1 Parameters: type, typeId, requirement

For this operation, the value of type MUST equal "extension". The typeId MUST be a URI string that represents a type of section document. The requirement parameter MUST be either "optional" or "mandatory". If any parameters are incorrect or not existent, the server MUST return a status code of 400.

If the system supports the extension identified by the typeId URI string, this operation will modify the extensions node in the root document and add this extension with the requirement level identified by the requirement parameter. The server MUST return a 201 status code.

If the system does not support the extension, it MUST not accept the extension if the requirement parameter is "mandatory" and return a status code of 409. If the requirement is "optional" the server MAY accept the operation, update the root document and send a status code of 201.

Status Code: 201, 400, 409

and turn it into a Shoulda context block. In the spec above, we're talking about what should happen when you POST to the root of an hData record. The functionality being described is how an extension can be added to the record, or how you can register a different type of thing for a record. For example, you could use this feature to add a medications extension to a record, if one did not exist there already. In our test code, we're going to try to register an allergies extension:

As you can see from the code, the combination of Shoulda and Rack Test makes it really easy to express the requirements set forth in the specification. The first test POSTs an incomplete request and should receive an error. The second sends a properly formed request and should get an appropriate response. The last test tries to POST a duplicate extension.

With the tests in place, we can move on to implementation.

I have created a DataMapper Resource to capture all of the information we want to store about an extension. I will also use the validation framework of DataMapper to make sure that all of the requirements for an extension are met. I end up with the resulting code:


With my model in place, I can implement the code to handle the web request:


The code above is pretty typical for Sinatra. The post block handles POSTs to the root URL. There I call a method to check and make sure that the type parameter is set. If it isn't, I halt processing and let the user know that the request is malformed with a 400 code. If the type is set to extension, then we drop into the handle_extension method. Inside the method, I build an Extension object and check it using the DataMapper validation framework.

There is a little bit of funkiness at the end of the handle_extension method where I need to check the type of error, because I need to return different status codes depending on the error. Unfortunately, the DataMapper validations didn't seem to offer any way to return anything with the errors other than a text message, so this seemed like the best way of doing things.
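The original gists aren't embedded here, but the decision logic being described can be sketched in plain Ruby, stripped of the Sinatra and DataMapper plumbing. The parameter checks and the 400/409/201 choices mirror the spec quoted above; the method name and the duplicate check are illustrative stand-ins for the real validation messages:

```ruby
# Sketch of the extension-handling logic without the framework plumbing.
# In the real app, DataMapper validations only hand back text messages,
# so the status code has to be chosen by inspecting them.
def status_for_extension(params, existing_type_ids = [])
  unless params["typeId"] && %w[optional mandatory].include?(params["requirement"])
    return 400 # a parameter is incorrect or missing: malformed request
  end
  if existing_type_ids.include?(params["typeId"])
    return 409 # conflicting extension (e.g. a duplicate)
  end
  201 # extension accepted and recorded on the root document
end
```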

The handle_section at the end of the post block handles another part of the spec. Don't worry, I didn't write it until I had the tests done first.

Lather, Rinse, Repeat

Implementing the rest of the hData Packaging and Transport spec followed the same process: take part of the spec and write a matching unit test, then implement and refine the code until the test passed.

In doing this, I found a couple of bugs in our spec. We hadn't provided parameter names for POSTing section documents. Our description of how to add metadata to documents was ambiguous at best. The nice part was that I was able to discover these things before even digging into the implementation.

What still needs to be done

While the Sinatra app that I wrote is a pretty good implementation of the hData Packaging and Transport spec, it still has some gaps. It doesn't support POSTing metadata for documents; it only creates and serves its own. It also doesn't support nested sections, but that shouldn't be too hard to add.

Wrap Up

You can find the code at eedrummer/classy-hdata on github. Even if you aren't interested in hData, this application should serve as an example Sinatra/DataMapper application. If you dig into the code and the hData spec, I think you'll see that hData is really easy to implement, especially in a classy framework like Sinatra.