How to Improve Your Tests by Being an Evil Coder

Note: this article assumes you’re somewhat familiar with the idea of Test-Driven Development.

Automated tests improve the quality of your code, at a minimum, by revealing some of its defects. If one of your tests fails, in theory this points to a defect in your code. You make a fix, the test passes, and the quality of your software has improved by some small amount as a result.

Another way to think about this is that the tests apply evolutionary selection pressure to your code. Your software needs to continually adapt to the harsh and changing conditions imposed by your test suite. Versions of the code that don’t pass the selection criteria don’t survive (read: make it into production).

There’s something missing from this picture though. So far, the selection pressure only applies in one direction: from the tests onto the production code. What about the tests themselves? Chances are, they have defects of their own, just like any other code. Not to mention the possibility of big gaps in the business requirements they cover. What, if anything, keeps the tests up-to-scratch?

If tests are actually an important tool for maintaining code quality, then this is an important question to get right. Low-quality tests can’t be expected to bring about higher quality software. In order to extract the most value out of automated tests, we need a way to keep them up to a high standard.

What could provide this corrective feedback? You could write tests for your original tests. But this quickly leads to an infinite regress. Now you need tests for those tests, and tests for those tests, and so on, for all eternity.

What if the production code itself could somehow apply selection pressure back onto the tests? What if you could set up an adversarial process, where the tests force the production code to improve and the production code, in turn, forces the tests to improve? This avoids the infinite regress problem.

It turns out this kind of thing is built into the TDD process. Here are the 3 laws of TDD:

  1. You must write a failing test before you write any production code.
  2. You must not write more of a test than is sufficient to fail – and failing to compile counts as failing.
  3. You must not write *more production code than is sufficient* to make the currently failing test pass (emphasis mine).

It’s following rule 3 that applies selection pressure back onto the tests. By only writing the bare minimum code in order to make a test pass, you’re forced to write another test to show that your code is actually half-baked. You then write just enough production code in order to address the newly failing test, and so on. It’s a positive feedback loop.

You end up jumping between two roles that are pitted against each other: the laziest developer on the planet and a test engineer who is constantly trying to show the developer up with failing tests.
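
To make this back-and-forth concrete, here’s a tiny invented example using JUnit (the Calculator class and its tests aren’t from any real project):

[sourcecode language="java"]
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    // Round 1: the tester writes a failing test...
    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }

    // Round 2: ...and then a second test to expose the evil coder's hard-coded answer.
    @Test
    public void addsTwoOtherNumbers() {
        assertEquals(9, new Calculator().add(4, 5));
    }
}

class Calculator {
    int add(int a, int b) {
        // The lazy evil coder's first pass was simply "return 5;" -- the bare minimum
        // needed to pass Round 1. Only Round 2 forces the real implementation:
        return a + b;
    }
}
[/sourcecode]

The hard-coded return 5 looks absurd, but that’s exactly the point: it’s the second, more demanding test that forces a genuine implementation into existence.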

Another benefit to being lazy is that it produces lean code. At some point, there are no more tests to write; you’ve implemented the complete specification as it’s currently understood. When this happens, you will often find that you’ve written far less code than expected. This is a win because all else being equal, less code is easier to understand.

Reading about this is one thing, but it needs to be tried out to really grasp its benefits. It turns out there is an exercise/game called Evil Coder that was created to practise this part of TDD. You pair up with another developer, with one person writing tests and the other taking the evil coder role:

Evil mute A/B pairing: Pairs are not allowed to talk. One person writes tests. The other person is a “lazy evil” programmer who writes the minimal code to pass the tests (but the code doesn’t need to actually implement the specification).

You can try this out by heading along to the next Global Day of Code Retreat event in your city – they are a lot of fun.

TL;DR: Improve your tests and your production code as a result, by being lazy and evil.


Thanks to Ali and Xiao for proofreading and providing feedback on a draft of this essay.

“Business needs vs. Customer needs” is a False Dichotomy

“We have to balance the customer’s needs with the business needs”.

How many times have you heard this while working in a software development team?

I’ve worked as a mobile developer at a number of large companies. In enterprise environments like these, typically the mobile app is “the storefront of the business”, and brings together a number of features paid for by other departments.

Often the initial requirements from the other department will come with a suggestion to make their feature more prominent in the app. For example, “add it to the top of the dashboard”, “just add a new tab for it” or “send a push notification to our users about it”.

This is understandable. The job of people from other departments is, first and foremost, to improve the area of the business they are responsible for. It is not their job to work out how to integrate their feature so that it plays nicely with everything else in the app – that’s the app team’s job.

When members of the app team point out that adding a new top-level tab or push-notification for every new feature requested by every department isn’t a sustainable long-term strategy, and will lead to a poor user experience, the protest that often comes back is something like:

Well, we have to remember to balance the customer’s needs with the business needs.

I was never comfortable with this statement. It’s taken me a while to think through exactly why this is. What I eventually concluded is that while it seems reasonable on the surface, buried in it is a wrong assumption.

It’s not that you should always prioritize the customer’s needs over business needs, or vice versa. Rather, the assumption underlying the statement – that these two things are at odds – is wrong. It’s a false dichotomy.

To believe that “balancing the user’s needs with business needs” makes sense, you need to be engaged in short-term thinking of one kind or another.

If you want your business to survive in the long term, there can be no distinction between the interests of your customer and those of your business.

Your business exists to serve a customer, in a sustainable way. In the final analysis (assuming a free market where your customers can leave), business needs and customer needs must be aligned. Promoting one at the expense of the other actually harms both.

In the long term, building a system that helps the business at the expense of your customers is actually harming both the business and your customers. (Spamming them with notifications in an attempt to boost engagement, for example).

Likewise, building a system that helps your customers at the expense of the business is actually harming both your customers and the business.

How does this second point make sense? That is, how does helping your customers at the expense of the business end up harming them?

Here’s how: presumably, your customers would rather your business continues to exist than not. For example, bribing customers with giveaways and subsidized prices isn’t sustainable. If you “spend 1 dollar to make 80 cents”, you will eventually go out of business.

When this happens, you will (at the very least) inconvenience your customers, leaving them bereft or forced against their wishes to switch to a competitor. Or if you offer something unique, you deprive them of that unique offering altogether.

Is it idealistic or wishful thinking to see the success of your customer and business as inextricably linked? Jeff Bezos, the CEO of Amazon, doesn’t seem to think so. The top 3 of his 4 pillars of Amazon’s success are:

  1. Customer Obsession
  2. Eagerness to Invent to Please the Customer
  3. Long-term Orientation

So next time you hear that “the needs of the customer need to be balanced with the needs of the business”, remember that to successful businesses there is really no distinction.


Thanks to Xiao and Arun for their feedback and suggestions.

Beyond DRY – Why Redundancy Makes Your Code More Robust and Less Fragile

Antifragile by Nassim Nicholas Taleb is a goldmine of practical ideas for software developers, despite not being a software development book.

Redundancy is one example of such an idea that is explored. Taleb explains how having some redundancy reduces fragility, and means we don’t need to predict the future so well. Think of food stored in your basement, or cash under your mattress.

Taleb notes how nature’s designs frequently employ redundancy (“Nature likes to overinsure itself”):

“Layers of redundancy are the central risk management property of natural systems. We humans have two kidneys […] extra spare parts, and extra capacity in many, many things (say, lungs, neural system, arterial apparatus), while human design tends to be spare and inversely redundant, so to speak – we have a historical track record of engaging in debt, which is the opposite of redundancy”

Software source code is a good example of human design that tends to be “spare” (having no excess fat) and “inversely redundant”. Redundancy in code is traditionally avoided at all costs. In fact, one of the first principles that junior developers are often taught is the DRY principle – Don’t Repeat Yourself. As far as DRY is concerned, redundant code is a blight that should be eliminated wherever it shows up.

There are good reasons for the DRY principle. Duplicate code adds noise to the project, making it harder to understand without adding any obvious value. It makes the project harder to modify because the same code must be maintained separately at each place it is duplicated. Each of these locations is also another opportunity to introduce bugs. Duplicate code feels like waste.

However, as Taleb states:

“Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens – usually.” [emphasis added]

What are these “unusual things that usually happen” in software development? And how could duplicate code possibly help protect us against them?

The Wrong Abstraction

Firstly, remember that duplication is eliminated by introducing abstractions, such as a function or class. The problem with abstractions is that it is difficult to know ahead of time whether a chosen abstraction is actually a good fit for your project. And the cost of getting this wrong is high. Poorly-chosen abstractions add friction to making the kinds of changes that are actually needed for the project, while still exacting an ongoing cost in terms of complexity. There’s also the risk that by the time poor abstractions have been recognised as such, they have already spread throughout the project. Rooting them out at this point will likewise impact code all throughout the project, potentially with unintended consequences.

The “unusual things that usually happen” in software development are unexpected, unpredictable (and unavoidable) changes in business requirements. These have the annoying effect of revealing the shortcomings of your abstractions, abstractions that you perhaps added while faithfully following the DRY principle.

Too-eager abstraction and a lack of redundancy mirror the problems of centralisation, another idea explored in Antifragile. Centralisation, while efficient in the short term (read: less code), makes systems fragile. When blow-ups happen, they can take down (or at least damage) the entire system. Taleb outlines in Antifragile how such fragility and lack of redundancy was the cause of the banking system collapse of 2008.

Redundancy in the form of duplicated code, on the other hand, makes code more robust. It does this by avoiding the worse evil of introducing the wrong abstraction. In this way, it limits the impact of unexpected changes in business requirements. As Sandi Metz puts it: “Duplication is far cheaper than the wrong abstraction.”
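
To make that concrete, here’s an entirely invented example of the kind of thing that happens (the tax rate, flags and names are made up for illustration):

[sourcecode language="java"]
public class PriceFormatting {

    // A premature abstraction: two screens shared this helper early on. As their
    // requirements diverged, it grew flags and caller-specific special cases
    // instead of being split back apart.
    static String formatPrice(double amount, boolean includeTax, boolean trimZeroCents,
                              boolean legacyCheckoutScreen) {
        double total = includeTax ? amount * 1.15 : amount;
        String text = String.format("$%.2f", total);
        if (trimZeroCents && text.endsWith(".00")) {
            text = text.substring(0, text.length() - 3);
        }
        if (legacyCheckoutScreen) {
            text = text + " (incl. GST)";   // bolted on for exactly one caller
        }
        return text;
    }

    // The "redundant" alternative: two boring functions that each screen owns outright.
    static String checkoutPrice(double amount) {
        return String.format("$%.2f (incl. GST)", amount * 1.15);
    }

    static String wishlistPrice(double amount) {
        return String.format("$%.2f", amount);
    }
}
[/sourcecode]

The shared version looks like it saves code, but every caller now depends on logic written for somebody else’s screen; the duplicated versions are free to change independently when the next requirement lands.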

The Rule of Three

As it turns out, there is another software development principle (or rule of thumb) which does recognise the risks of poor abstractions, and seeks to mitigate them through some redundancy. It’s called the “Rule of Three”. It states that you should wait until a piece of code appears three times before abstracting it out. (Note that this appears to contradict the DRY principle). This minimises the chances that the abstraction is premature, and increases the chances that it addresses a real, recurring feature of the problem domain that is worth the cost of abstraction.

Introducing an abstraction is in some sense a prediction of the future. Abstractions make a certain class of future changes easier, at the cost of some extra complexity and fragility. They are worth this cost if and only if the types of changes they make easier actually turn out to be reasonably common. Following The Rule of Three means deliberately holding off on making a prediction until more evidence has come in. The assumption built into the Rule of Three is that past changes are the best predictor of future changes.
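
As a rough sketch of the rule in action (invented code again – the point is just the timing of the extraction):

[sourcecode language="java"]
public class ShippingLabels {

    // First and second occurrence: tolerate the duplication and keep watching.
    static String orderLabel(String id, String city) {
        return id.toUpperCase() + " / " + city.trim();
    }

    static String invoiceLabel(String id, String city) {
        return id.toUpperCase() + " / " + city.trim();
    }

    // Third occurrence: the pattern has recurred enough to be worth naming, so extract
    // the helper now and point the two older call sites at it in the same change.
    static String shipmentLabel(String id, String city) {
        return formatLabel(id, city);
    }

    private static String formatLabel(String id, String city) {
        return id.toUpperCase() + " / " + city.trim();
    }
}
[/sourcecode]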

Back to Nature

Now to return to Taleb’s observation of widespread redundancy in nature’s designs. An interesting implication of this is that despite all of the apparent “waste” involved, evolutionary processes have nonetheless converged onto it as the best strategy for dealing with unpredictability – a permanent feature of the real world (or at least, a better strategy than no redundancy – having one kidney, for instance).

At a high level, our software projects and teams are similar in the sense that they exist in a challenging, competitive environment punctuated by unpredictable changes. If meaningful parallels can be made between complex systems, it’s worth considering the possibility that despite the apparent “waste” involved, some redundancy is likewise the best strategy for dealing with the unpredictability in our environment too.

This is all to say: go forth and fearlessly copy-paste more code 🙂

References and Further Reading

The Wrong Abstraction

Write code that is easy to delete, not easy to extend

Antifragile: Things That Gain from Disorder (Incerto)

10 Tips for Exploring Foreign Cities

 

Ruins of St. Paul's, Macau

Last month I was fortunate enough to spend two weeks traveling around southern China including Hainan, Guangzhou, Macau and Hong Kong. It was an awesome trip; I would particularly recommend stopping by Hong Kong for a few days to check it out if you get the chance. It’s an amazing, vibrant city.

At some point during the trip I started noting down the things I was learning (about travel in general, and travel around cities in particular) into Evernote. Over time the list kept growing. What follows is an edited version of the original list, compiled into a top 10 (in typical web article fashion…)

1. Get the phone numbers of your contacts in the foreign country

If you’re meeting friends at the destination airport, make sure you have their mobile number. Just having them on Google Hangouts, WeChat, Facebook messenger or <insert online service here> won’t cut it as you can’t rely on WiFi access at airports. In Shanghai for example, you’ll still need a local mobile number to access the “Free” airport wifi.

Old fashioned and low-tech is sometimes best.

2. Double check that airports of connecting flights match

Cities can have more than one airport, and they may not be close together at all. As a New Zealander, this was surprising to learn…

3. Bring plenty of cash in the local currency

Unlike credit cards and bank cards, cash is guaranteed to be accepted everywhere and is a lifesaver in emergencies.

Even if you’re going to a first-world country, don’t assume your card will be widely accepted, even at popular tourist attractions. For instance, you’ll need cash to buy a ticket for the Victoria Peak Tram in Hong Kong.

Another tip: divide your cash up and distribute it amongst your bags. That way if one goes missing, you still have backups. I had three stashes: one in my checked-in luggage, one in my backpack and a small amount in my wallet.

Again, low-tech = good.

4. Pack the night before

It’s easy to be unrealistic about how easy and fast it will be to “throw everything into your bag in the morning”. If checking out of your hotel room in the morning, do all possible packing the night before.

5. Invest in good walking shoes

When you’re out exploring all day every day, decent shoes will really pay dividends. Conversely, bad shoes and feet that are killing you each day can put a damper on your travel experience!

6. Sort out mobile data for your smartphone

Having internet access on your smartphone is absolutely essential when travelling, if only for Maps/GPS, Google Translate and being able to research other places to see while you’re already out.

With that in mind, set up global roaming with your mobile provider before you leave, or check whether SIM cards are freely available in the destination country. Some countries require you to be a local resident and/or show identification to get a SIM card (Hong Kong isn’t like this; China is).

Remember to pack the SIM card removal tool for your phone, if applicable.

If going to a country with restricted internet access, you may want to sort out VPN access beforehand so you can still access the online services you’re used to (Facebook, YouTube, etc). Record multiple fallback IP addresses for your VPN provider as it’s hard to know which will be blocked.

7. Always have snacks and water with you

Bring water and lots of snack foods such as energy bars and nuts in your day pack to keep up your energy levels throughout the day. You never know where you might end up while exploring; it might be a long time between proper meals.

8. Find out the off-peak hours of the tourist attractions you want to visit…

…and go then to avoid the crowds. Crowds are pretty much guaranteed at any remotely popular attraction no matter the time of day, but you can avoid the worst of it with careful planning. Again, this was a bit of a surprise to someone from a country as small as N.Z., where things are pretty much guaranteed to be quiet on weekdays and in the mornings.

9. Get a Metro map

This is a must if you’re checking out any city with a decent metro (e.g. Guangzhou, Shanghai and Hong Kong), given the sheer amount of time you’ll spend using it. A paper map is best (no worries about dead batteries), but you can also download a PDF onto your phone or tablet.

10. Invest in or borrow a decent camera

As good as phone cameras are these days, there’s still no substitute for a standalone camera.

And finally (bonus tip 11), if I’ve learned one thing about travel so far it’s this: the big-name tourist attractions at any given destination can be pretty overrated. They’re often geared towards foreigners so much so that they shield you from the actual local culture. Some of the most enjoyable experiences I’ve had travelling have been while wandering around exploring, taking it all in and spontaneously discovering things. So don’t just tick all the boxes, get out there and experience the authentic whatever-place-it-is.

What percentage of your users use your app daily?

Both the Developer Console and Google Analytics can display your app’s active users – the number of users who opened your app at least once on a given day. Knowing the number of active users is a good start to getting an idea of user engagement, but the problem with looking at it in isolation is that it doesn’t tell you how many users have your app installed but don’t open it at all each day.

What’s needed is a new metric with more context – the number of active daily users as a percentage of total users. This is a more accurate indicator of the actual value your app is offering your users, and can be used to validate that specific changes to your app are actually making it more useful or enjoyable (in Lean Startup terms, it is more a core metric and less of a vanity metric).
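
To pin the metric down, here’s a trivial sketch with made-up numbers (not data from a real app):

[sourcecode language="java"]
public class DailyActivePercent {
    public static void main(String[] args) {
        int currentUserInstalls = 1430;   // hypothetical: users who currently have the app installed
        int dailyActiveUsers = 187;       // hypothetical: users who opened it at least once today

        double percent = 100.0 * dailyActiveUsers / currentUserInstalls;
        System.out.printf("%.1f%% of installed users were active today%n", percent);   // ~13.1%
    }
}
[/sourcecode]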

How to measure daily active users as a percentage for your Android app

You will need:

  • an Android app with Google Analytics and a reasonable amount of analytics data
  • Excel, LibreOffice Calc or an equivalent spreadsheet program for plotting graphs

Note: the sample screenshots I’ve included here use data from my recently released RadioDrive app.

  1. Go to Google Play Developer Console, select your app, go to Statistics.
  2. Select Current Installs by User (this accounts for users that have your app installed on more than one of their devices, unlike Current Installs by Device).
  3. Select 1 year for the time range so you get everything.
  4. Click Export to CSV. In the dialog make sure only the Users -> Current checkbox is selected.

Now we want to get hold of data for the number of active users. The Play Developer Console does have this statistic, but unfortunately you can’t currently export the data. Onward to Google Analytics…

  1. Login to Google Analytics, select “All Mobile App Data” for your app.
  2. Click Active Users from your App Overview page.

  3. Adjust the date range (drop-down box in the top-right corner) if necessary, then click Export > CSV

  4. The next step is to import and combine both datasets in Excel. Once you have copied both sets of data into the same spreadsheet, you’ll want to sort the Developer Console data by increasing date so it matches the Analytics data. To do this in Calc, box-select all rows for the date and current_user_install columns, then select Data -> Sort -> Sort by ascending date.

  5. Move active user data so the dates correspond, if necessary…

  6. Make a new column for percentage (Formula: =(C6/B6)*100). You can delete the Day Index column now as it’s redundant.

  7. Plot a line graph (date on X axis, percent on Y axis)

So far so good, we have a graph showing the percentage of active users each day.

But there’s a problem. Say you release an update for your app that is a total flop. Users start to uninstall your app in droves, except for a small segment of your dedicated fans. In this case, the percentage of active users may actually go up, as your botched update eliminates all but your most loyal users.

If you keep an eye on your other statistics such as daily uninstalls and number of active users (as well as monitoring actual user feedback), you would (hopefully) pick up this kind of scenario. However it’d be nice to be able to see this situation occurring in the same graph.

To do this, you can simply plot current user installs or number of active users on the same axes. That way, you’ll know something is up if either of them start trending downward.

Here I’ve plotted current user installs on a secondary Y axis:

The final graph (after adjusting the percentage scale to prevent overlap):

(In case you’re wondering, the lack of active user data until the 8th Dec is due to Google Analytics not being in the app until then!)

Extra credit: add a 3 or 5 day moving average trend line to % Active Users to smooth out day-to-day fluctuations – e.g. a formula like =AVERAGE(D2:D6) dragged down alongside the percentage column gives a 5-day trailing average (having a larger sample size helps with this also).

What core metrics do you measure for your app and what tools do you use to measure them?

Google I/O 2013 – Cognitive Science and Design, and how it applies to Android apps

This is an excellent talk by Alex Faaborg at Google I/O 2013 about cognitive science principles and how they apply to interface design. Here’s a summary of some of the main points and how they could be used to improve your apps:

  • We can search for objects of the same colour much faster than searching for objects of the same shape [18:26]
  • We can scan a group of faces for one we recognise in parallel rather than sequentially. This could be taken advantage of in messaging and address book apps, for example [10:13]
  • Objects in our periphery are recognised much faster than in our frontal field (tiger example in the video). You can put a small notification icon in the corner of the screen away from the user’s focal point and it will still be noticed [6:50]
  • Colour-deficiency: you can get away with using green and red as long as the contrast is significantly different. Best approach is to test your interface with filtering tools to see how it would actually look (e.g. Photoshop) [13:50]
  • Our brains are very good at recognising patterns. It’s not necessary to group objects together in a box, just having whitespace between groups will do [3:24]
  • You’ll recognise a silhouette of an object that just shows its basic geometry faster than you will recognise a more photo-realistic depiction of the object. This principle is used in the Holo icon set [9:10]
  • Notifications/interruptions wipe the contents of our working memory and make us lose the state of “creative flow” if we were in it. Takeaway: use notifications carefully [22:22]
  • “Chunking” optimizes for our working memory. Examples are the groups of digits in credit card and phone numbers. Make sure your interface supports these chunks and ignores user-entered whitespace (see the small sketch after this list)! [21:17]
  • We make trust decisions quickly and once made they are slow to change, even to the point of us explaining away new information that goes against them. First impressions matter – make sure you have a quality application icon [24:16]
  • You don’t *have* to be consistent with existing interfaces and interaction paradigms when designing your app. Combining innovation with teaching the user (e.g. with a quick example video) can work well. Example: collaborating on documents via email attachments vs. using Google Docs [31:21]
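
To illustrate the chunking point from the list above, here’s a small invented sketch: accept “chunked” card-number input by ignoring whitespace, and re-chunk the digits into groups of four for display (4111 1111 1111 1111 is a standard test number, not a real one):

[sourcecode language="java"]
public class CardNumberChunking {

    // Ignore whatever spacing the user typed.
    static String normalize(String userInput) {
        return userInput.replaceAll("\\s+", "");
    }

    // Re-chunk into groups of four digits for display.
    static String displayChunks(String digits) {
        return digits.replaceAll("(.{4})(?=.)", "$1 ");
    }

    public static void main(String[] args) {
        String typed = "4111 1111  1111 1111";
        System.out.println(normalize(typed));                  // 4111111111111111
        System.out.println(displayChunks(normalize(typed)));   // 4111 1111 1111 1111
    }
}
[/sourcecode]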

Android: 9 patching a family of images the easy way

9 patch images in Android are great but if you happen to have a family of graphics to convert, it can get pretty tedious. I had a collection of button graphics that needed converting to 9 patches using the same stretchable regions.

Rather than do it all by hand with Photoshop or GIMP (and inevitably need to redo them all again later when something needed changing) I wrote a small BASH script to do it.

To use the script, first use the draw9patch tool to create the 9 patch info for one of your graphics – this will become the template. Once you’re done, go:

[code language="bash" light="true"]
./9batch.sh template.9.png button2.png button3.png ...
[/code]

to copy the 1 pixel border from the template to your remaining graphics and save a .9.png version of each of them.

Note that you’ll need to install ImageMagick to use the 9batch script:

[code language="bash" light="true"]
sudo apt-get install imagemagick
[/code]

Apparently WordPress won’t let me upload the script itself so here’s the source code:

[code language="bash" light="true" collapse="true"]
#!/bin/bash

if [ "$#" -lt 2 ]; then
    echo "Usage: 9batch.sh template image1 image2 ..." >&2
    echo
    echo "Applies 9 patch info to a family of images using one image as the template" >&2
    echo "Template image should be 2 pixels wider and higher than source images" >&2
    exit 1
fi

# 9 patch image to use as template
src=$1

for i in "${@:2}"
do
    # use sed to change extension from .png to .9.png and assign result to 'out'
    out=`echo "$i" | sed -e 's:\(....\)$:.9\1:'`
    composite -gravity center "$i" "$src" "$out"
done
[/code]

Android Device Nudge Detection Helper Class

I recently added a feature to StarCraft 2 Build Player that starts playing build orders when the user’s phone is nudged. The idea is that you don’t have to waste precious seconds looking down at your phone to tap the “Play” button – you can just mindlessly bump your phone on your desk and you’re off.

Anyway, it turned out to be pretty easy to factor this into a reusable class so here it is:

[sourcecode language="java"]
package com.kiwiandroiddev.sc2buildassistant;

import java.util.ArrayList;

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Handler;

/**
* Class for reporting when the device’s acceleration (excluding gravity) exceeds
* a certain value. Compatible with all Android versions as it uses Sensor.TYPE_ACCELEROMETER
* rather than Sensor.TYPE_LINEAR_ACCELERATION.
*
* NudgeDetector objects are initially disabled. To use, implement
* the NudgeDetectorEventListener interface in your class, then register it
* to a new NudgeDetector object with registerListener(). Finally, call
* setEnabled(true) to start detecting device movement. You should add a call
* to stopDetection() in your Activity’s onPause() method to conserve battery
* life.
*
* @author kiwiandroiddev
*
*/
public class NudgeDetector implements SensorEventListener {

    private ArrayList<NudgeDetectorEventListener> mListeners;
    private Context mContext;
    private SensorManager mSensorManager;
    private Sensor mAccelerometer;
    private boolean mEnabled = false;
    private boolean mCurrentlyDetecting = false;
    private boolean mCurrentlyChecking = false;
    private int mGraceTime = 1000; // milliseconds
    private int mSampleRate = SensorManager.SENSOR_DELAY_GAME;
    private double mDetectionThreshold = 0.5f; // ms^-2
    private float[] mGravity = new float[] { 0.0f, 0.0f, 0.0f };
    private float[] mLinearAcceleration = new float[] { 0.0f, 0.0f, 0.0f };

    /**
     * Client activities should implement this interface and register themselves using
     * registerListener() to be alerted when a nudge has been detected
     */
    public interface NudgeDetectorEventListener {
        public void onNudgeDetected();
    }

    public NudgeDetector(Context context) {
        mContext = context;
        mListeners = new ArrayList<NudgeDetectorEventListener>();
        mSensorManager = (SensorManager) mContext.getSystemService(Context.SENSOR_SERVICE);
        mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    // Accessors follow

    public void registerListener(NudgeDetectorEventListener newListener) {
        mListeners.add(newListener);
    }

    public void removeListeners() {
        mListeners.clear();
    }

    public void setEnabled(boolean enabled) {
        if (!mEnabled && enabled) {
            // set the flag before starting, since startDetection() checks it
            mEnabled = true;
            startDetection();
        } else if (mEnabled && !enabled) {
            // stopDetection() also checks the flag, so only clear it afterwards
            stopDetection();
            mEnabled = false;
        }
    }

    public boolean isEnabled() {
        return mEnabled;
    }

    /**
     * Returns whether this detector is currently registered with the sensor manager
     * and is receiving accelerometer readings from the device.
     */
    public boolean isCurrentlyDetecting() {
        return mCurrentlyDetecting;
    }

    /**
     * Sets the amount of acceleration needed to trigger a "nudge".
     * Units are metres per second per second (ms^-2)
     */
    public void setDetectionThreshold(double threshold) {
        mDetectionThreshold = threshold;
    }

    public double getDetectionThreshold() {
        return mDetectionThreshold;
    }

    /**
     * Sets the minimum amount of time between when startDetection() is called
     * and nudges are actually detected. This should be non-zero to avoid
     * false positives straight after enabling detection (e.g. at least 500ms)
     *
     * @param milliseconds_delay
     */
    public void setGraceTime(int milliseconds_delay) {
        mGraceTime = milliseconds_delay;
    }

    public int getGraceTime() {
        return mGraceTime;
    }

    /**
     * Sets how often accelerometer readings are received. Affects the accuracy of
     * nudge detection. A new sample rate won't take effect until stopDetection()
     * then startDetection() is called.
     *
     * @param rate must be one of SensorManager.SENSOR_DELAY_UI,
     * SensorManager.SENSOR_DELAY_NORMAL, SensorManager.SENSOR_DELAY_GAME,
     * SensorManager.SENSOR_DELAY_FASTEST
     */
    public void setSampleRate(int rate) {
        mSampleRate = rate;
    }

    public int getSampleRate() {
        return mSampleRate;
    }

    /**
     * Starts listening for device movement
     * after an initial delay specified by grace time attribute -
     * change this using setGraceTime().
     * Client Activities might want to call this in their onResume() method.
     *
     * The actual sensor code uses a moving average to remove the
     * gravity component from acceleration. This is why readings
     * are collected and not checked during the grace time
     */
    public void startDetection() {
        if (mEnabled && !mCurrentlyDetecting) {
            mCurrentlyDetecting = true;
            mSensorManager.registerListener(this, mAccelerometer, mSampleRate);

            Handler myHandler = new Handler();
            myHandler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    if (mEnabled && mCurrentlyDetecting) {
                        mCurrentlyChecking = true;
                    }
                }
            }, mGraceTime);
        }
    }

    /**
     * Deregisters accelerometer sensor from the sensor manager.
     * Does nothing if nudge detector is currently disabled.
     * Client Activities should call this in their onPause() method.
     */
    public void stopDetection() {
        if (mEnabled && mCurrentlyDetecting) {
            mSensorManager.unregisterListener(this);
            mCurrentlyDetecting = false;
            mCurrentlyChecking = false;
        }
    }

    // SensorEventListener callbacks follow

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // alpha is calculated as t / (t + dT)
        // with t, the low-pass filter's time-constant
        // and dT, the event delivery rate

        final float alpha = 0.8f;

        mGravity[0] = alpha * mGravity[0] + (1 - alpha) * event.values[0];
        mGravity[1] = alpha * mGravity[1] + (1 - alpha) * event.values[1];
        mGravity[2] = alpha * mGravity[2] + (1 - alpha) * event.values[2];

        mLinearAcceleration[0] = event.values[0] - mGravity[0];
        mLinearAcceleration[1] = event.values[1] - mGravity[1];
        mLinearAcceleration[2] = event.values[2] - mGravity[2];

        // find length of linear acceleration vector
        double scalarAcceleration = mLinearAcceleration[0] * mLinearAcceleration[0]
                + mLinearAcceleration[1] * mLinearAcceleration[1]
                + mLinearAcceleration[2] * mLinearAcceleration[2];
        scalarAcceleration = Math.sqrt(scalarAcceleration);

        if (mCurrentlyChecking && scalarAcceleration >= mDetectionThreshold) {
            for (NudgeDetectorEventListener listener : mListeners)
                listener.onNudgeDetected();
        }
    }
}

[/sourcecode]
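
For completeness, here’s a minimal sketch of how an Activity might wire the detector up, following the usage described in the class comment above (the activity name and threshold value are placeholders, not taken from the real app):

[sourcecode language="java"]
import android.app.Activity;
import android.os.Bundle;

import com.kiwiandroiddev.sc2buildassistant.NudgeDetector;

public class BuildPlayerActivity extends Activity
        implements NudgeDetector.NudgeDetectorEventListener {

    private NudgeDetector mNudgeDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        mNudgeDetector = new NudgeDetector(this);
        mNudgeDetector.registerListener(this);
        mNudgeDetector.setDetectionThreshold(1.0);   // ms^-2, tune to taste
        mNudgeDetector.setEnabled(true);
    }

    @Override
    protected void onResume() {
        super.onResume();
        mNudgeDetector.startDetection();
    }

    @Override
    protected void onPause() {
        super.onPause();
        mNudgeDetector.stopDetection();   // conserve battery, as the class comment suggests
    }

    @Override
    public void onNudgeDetected() {
        // react to the nudge, e.g. start playing the current build order
    }
}
[/sourcecode]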

The reason I stuck to using Sensor.TYPE_ACCELEROMETER was because I want to support Froyo with my app. If you’re only targeting 2.3 (API level 9) and higher, you could use Sensor.TYPE_LINEAR_ACCELERATION, and simplify this code a fair bit by stripping out the gravity calculation in onSensorChanged(), etc.

Feel free to use this in your projects. Drop me a comment if you spot bugs or have any suggestions.

Data on Android device supported features

I’ve recently been experimenting with OpenGL ES 2.0 on Android for a graphical app (some excellent guides can be found at http://www.learnopengles.com/). So far so good. It turns out that gone are the days of countless fixed-function calls like glBegin(), glVertex3f() and glColor4f() for sending vertex data; nowadays you use shaders for everything and send your vertex data to OpenGL in large chunks. Supposedly this makes the graphics driver software a lot simpler to write and leads to better performance overall. Keeping track of all of those calls and their corresponding closing calls could end up a bit of a headache, so it seems to provide some benefit to application developers too.
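
To give a flavour of what “sending vertex data in large chunks” looks like with ES 2.0 on Android, here’s a rough sketch (shader compilation, linking and the GLSurfaceView plumbing are omitted; a linked program handle with an "aPosition" attribute is assumed):

[sourcecode language="java"]
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import android.opengl.GLES20;

public class TriangleSketch {

    // Pack the vertices into a native-order buffer, three floats per vertex.
    private static final float[] TRIANGLE = {
             0.0f,  0.5f, 0.0f,
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
    };

    static void drawTriangle(int program) {
        FloatBuffer vertexBuffer = ByteBuffer.allocateDirect(TRIANGLE.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexBuffer.put(TRIANGLE).position(0);

        // Hand the whole chunk to the shader program and draw it in one call.
        GLES20.glUseProgram(program);
        int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(aPosition);
        GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
        GLES20.glDisableVertexAttribArray(aPosition);
    }
}
[/sourcecode]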

Before diving in and using ES 2.0 exclusively (well, at first anyway – code for ES 1.x support can always be added later) I wanted to get an idea of how widely ES 2.0 is supported across Android devices because it could have a big effect on the market size for my app.

After filtering through some anecdotal evidence on Stack Overflow, it turned out (not surprisingly) that the best place to find this data was straight from the horse’s mouth: the Android Dashboards page.

According to the data, ES 2.0 support is over 90%, and it seems reasonable to assume it’s only going to increase over time. So that settles it – OpenGL ES 2.0 it is.

The Dashboards page also has data on the installation base for each Android version which may also be very useful to you during the research phase of developing your app.