Android Overclocking, Governors and IO Schedulers??

What does all this mumbo jumbo mean? Do I need it? What can it do? What can't it do? These are all valid questions that most people dabbling in the Android smartphone world eventually ask.

Let's start with the fact that you need to be ROOTED to be able to change any of these properties. That alone possibly eliminates half of the Android users out there.

This isn't a tutorial. It's meant to explain things to those who are rooted, see the options, and aren't sure what is what.

I personally don't overclock my devices, simply because we now have quad-core smartphones and tablets that are able to deliver. You always run the risk of damaging your device if you do. Granted, most chips have a built-in shutdown when they reach dangerous levels, but that doesn't mean you should just go ahead and do it.

I'll focus mostly on what can actually increase the performance of your device, though there's no guarantee it won't adversely affect it.

Governors and I/O schedulers: what are they? The short and simple answer is that they control how your processor (CPU), your video processing (GPU) and your flash memory are accessed and driven. All three components are intertwined and work together.

If you're interested in more in-depth information, you can refer to the CPUfreq governors guide at kernel.org.
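
To make this a little more concrete: both governors and I/O schedulers are exposed through the Linux sysfs interface, which is what the various kernel-tweaking apps read and write under the hood. Below is a minimal Python sketch for inspecting the current settings; the cpufreq paths are the standard Linux ones, the block device name (mmcblk0) is an assumption that varies by device, and changing any of these values requires root.

    # Read the current CPU governor, the governors available, and the active
    # I/O scheduler straight from sysfs. The cpufreq paths are standard Linux;
    # mmcblk0 is an assumption (a typical name for internal eMMC storage).

    CPU0 = "/sys/devices/system/cpu/cpu0/cpufreq"
    BLOCK = "/sys/block/mmcblk0/queue"

    def read(path):
        with open(path) as f:
            return f.read().strip()

    print("Current governor:   ", read(CPU0 + "/scaling_governor"))
    print("Available governors:", read(CPU0 + "/scaling_available_governors"))
    # The active I/O scheduler is shown in [brackets], e.g. "noop deadline [cfq]"
    print("I/O schedulers:     ", read(BLOCK + "/scheduler"))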

I will focus on the main governors and I/O schedulers offered on Android; this won't necessarily cover every one out there.

Since a lot of this information has already been written up, I will take extracts from http://rootzwiki.com/topic/40336-cpu-governors-explained/ and present them in a simplified form.

Governors

1: OnDemand Governor:

This governor has a hair trigger for boosting clockspeed to the maximum speed set by the user. If the CPU load placed by the user abates, the OnDemand governor will slowly step back down through the kernel's frequency steppings until it settles at the lowest possible frequency, or the user executes another task to demand a ramp.

OnDemand has excellent interface fluidity because of its high-frequency bias, but it can also have a relatively negative effect on battery life versus other governors. OnDemand is commonly chosen by smartphone manufacturers because it is well-tested, reliable, and virtually guarantees the smoothest possible performance for the phone, because users are vastly more likely to complain about performance than about the few hours of extra battery life another governor could have granted them. But then again, you'll get people who will complain about both.

OnDemand scales its clockspeed in a work queue context. In other words, once the task that triggered the clockspeed ramp is finished, OnDemand will attempt to move the clockspeed back to minimum. If the user executes another task that triggers OnDemand's ramp, the clockspeed will bounce from minimum to maximum. This can happen especially frequently if the user is multi-tasking. This, too, has negative implications for battery life.
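
For the curious, OnDemand's trigger-happiness is controlled by a couple of sysfs tunables. The sketch below reads the two main ones; the path and the exact set of tunables vary by kernel, so treat this as an illustration rather than a guaranteed layout.

    # Inspect OnDemand's main tunables (changing them requires root).
    # The path below is where most kernels put them, but this is an assumption.
    import os

    ONDEMAND = "/sys/devices/system/cpu/cpufreq/ondemand"

    for name in ("up_threshold", "sampling_rate"):
        path = os.path.join(ONDEMAND, name)
        if os.path.exists(path):
            with open(path) as f:
                print(name, "=", f.read().strip())

    # up_threshold  - CPU load (%) that triggers a ramp to a higher frequency
    # sampling_rate - how often (in microseconds) the load is re-evaluated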

2: Performance Governor:

This locks the phone's CPU at maximum frequency. While this may sound like an ugly idea, there is growing evidence to suggest that running a phone at its maximum frequency at all times will allow a faster race-to-idle. Race-to-idle is the process by which a phone completes a given task, such as syncing email, and returns the CPU to the extremely efficient low-power state. This still requires extensive testing, and a kernel that properly implements a given CPU's C-states (low power states). So this could technically mean better battery life.
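
If you want to experiment with it, switching governors is nothing more than writing the governor's name into scaling_governor as root. A quick sketch; the path is the standard cpufreq one, and applying it to every core (and keeping it across reboots) is what the kernel-tweaking apps do for you.

    # Switch cpu0 to the "performance" governor (requires root, and "performance"
    # must appear in scaling_available_governors for this kernel).
    GOV_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

    with open(GOV_PATH, "w") as f:
        f.write("performance")

    with open(GOV_PATH) as f:
        print("Governor is now:", f.read().strip())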

3: Powersave Governor:

The opposite of the Performance governor, the Powersave governor locks the CPU frequency at the lowest frequency set by the user. Prepare for slow performance and slow reaction time.

4: Conservative Governor:

This biases the phone to prefer the lowest possible clockspeed as often as possible. In other words, a larger and more persistent load must be placed on the CPU before the conservative governor will be prompted to raise the CPU clockspeed. Depending on how the developer has implemented this governor, and the minimum clockspeed chosen by the user, the conservative governor can introduce choppy performance. On the other hand, it can be good for battery life.

The Conservative Governor is also frequently described as a "slow OnDemand," if that helps to give you a more complete picture of its functionality.
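
That "slow OnDemand" character comes from extra tunables that OnDemand doesn't have, notably a separate threshold for stepping down and a configurable step size. Again, the path below is an assumption and varies by kernel.

    # Conservative-specific tunables (read-only here; changing them needs root).
    # up_threshold   - load (%) above which the frequency is stepped up
    # down_threshold - load (%) below which the frequency is stepped down
    # freq_step      - size of each step, as a percentage of the maximum frequency
    import os

    CONSERVATIVE = "/sys/devices/system/cpu/cpufreq/conservative"

    for name in ("up_threshold", "down_threshold", "freq_step"):
        path = os.path.join(CONSERVATIVE, name)
        if os.path.exists(path):
            with open(path) as f:
                print(name, "=", f.read().strip())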

5: Interactive Governor:

Much like the OnDemand governor, the Interactive governor dynamically scales CPU clockspeed in response to the workload placed on the CPU by the user. This is where the similarities end. Interactive is significantly more responsive than OnDemand, because it's faster at scaling to maximum frequency.

Unlike OnDemand, which you'll recall scales clockspeed in the context of a work queue, Interactive scales the clockspeed over the course of a timer set arbitrarily by the kernel developer. In other words, if an application demands a ramp to maximum clockspeed (by placing 100% load on the CPU), a user can execute another task before the governor starts reducing CPU frequency. This can eliminate the frequency bouncing discussed in the OnDemand section. Because of this timer, Interactive is also better prepared to utilize intermediate clockspeeds that fall between the minimum and maximum CPU frequencies. This is another pro-battery life benefit of Interactive.

However, because Interactive is permitted to spend more time at maximum frequency than OnDemand (for device performance reasons), the battery-saving benefits discussed above are effectively negated. Long story short, Interactive offers better performance than OnDemand (some say the best performance of any governor) and negligibly different battery life.

Interactive also makes the assumption that a user turning the screen on will shortly be followed by the user interacting with some application on their device. Because of this, screen on triggers a ramp to maximum clockspeed, followed by the timer behaviour described above.
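
The timer and ramp behaviour described above is exposed through Interactive's own tunables. The names below are the common ones in Android kernels that ship this governor, but whether they exist, and where, depends on the kernel.

    # Common Interactive tunables (path is an assumption; requires root to change).
    # hispeed_freq    - frequency to jump to when load crosses go_hispeed_load
    # go_hispeed_load - load (%) that triggers the jump to hispeed_freq
    # timer_rate      - how often (in microseconds) the load is re-sampled
    import os

    INTERACTIVE = "/sys/devices/system/cpu/cpufreq/interactive"

    for name in ("hispeed_freq", "go_hispeed_load", "timer_rate"):
        path = os.path.join(INTERACTIVE, name)
        if os.path.exists(path):
            with open(path) as f:
                print(name, "=", f.read().strip())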

There you have it for governors. These are the most commonly used ones. Depending on the kernel on your phone, or the one you've flashed, you may have additional options.

I/O Schedulers

Anticipatory
Based on two facts:

Disk seeks are really slow.
Write operations can happen whenever, but there is always some process waiting for a read operation.

So anticipatory prioritizes read operations over writes. It anticipates synchronous read operations.

Advantages
Read requests from processes are never starved.
As good as noop for read-performance on flash drives.

Disadvantages
The 'guess work' might not always be reliable.
Reduced write-performance on high performance disks.

BFQ

Instead of the time slices allocated by CFQ, BFQ assigns budgets. The disk is granted to an active process until its budget (a number of sectors) expires. BFQ assigns high budgets to non-read tasks. The budget assigned to a process varies over time as a function of its behavior.

Advantages
Believed to be very good for USB data transfer rates.
Believed to be the best scheduler for HD video recording and video streaming (because of less jitter compared to CFQ and others).
Considered an accurate I/O scheduler.
Achieves about 30% more throughput than CFQ on most workloads.

Disadvantages
Not the best scheduler for benchmarking.
A higher budget assigned to a process can hurt interactivity and increase latency.

CFQ

The Completely Fair Queuing scheduler maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. Each per-process queue contains synchronous requests from processes. The time slice allocated to each queue depends on the priority of the 'parent' process. V2 of CFQ has some fixes which solve process I/O starvation and allows some small backward seeks in the hope of improving responsiveness.

Advantages
Considered to deliver balanced I/O performance.
Easiest to tune.
Excels on multiprocessor systems.
Best database system performance after deadline.

Disadvantages
Some users report that media scanning takes the longest to complete when using CFQ. This could be because bandwidth is distributed equally to all I/O operations during boot-up, so media scanning is not given any special priority.
Jitter (worst-case-delay) exhibited can sometimes be high, because of the number of tasks competing for the disk.

Deadline

The goal is to minimize I/O latency and starvation of requests. This is achieved with a round-robin policy that is fair among multiple I/O requests. Five queues are aggressively used to reorder incoming requests.

Advantages
Nearly a real time scheduler.
Excels in reducing latency of any given single I/O.
Best scheduler for database access and queries.
Bandwidth requirement of a process (what percentage of CPU it needs) is easily calculated.
Like noop, a good scheduler for solid state/flash drives.

Disadvantages
When the system is overloaded, the set of processes that may miss their deadline is largely unpredictable.
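
Deadline also exposes its expiry times through sysfs while it is the active scheduler. The tunables below are the standard deadline ones; the block device name is, as before, an assumption.

    # Deadline's per-queue tunables (only present while deadline is active).
    # read_expire / write_expire - expiry time, in milliseconds, for read/write requests
    # fifo_batch                 - how many requests are dispatched per batch
    import os

    IOSCHED = "/sys/block/mmcblk0/queue/iosched"  # device name is an assumption

    for name in ("read_expire", "write_expire", "fifo_batch"):
        path = os.path.join(IOSCHED, name)
        if os.path.exists(path):
            with open(path) as f:
                print(name, "=", f.read().strip())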

Noop

Inserts all incoming I/O requests into a First In First Out queue and implements request merging. Best used with storage devices that do not depend on mechanical movement to access data (yes, like our flash storage). The advantage here is that flash storage does not require the reordering of multiple I/O requests, unlike normal hard drives.

Advantages
Serves I/O requests with the fewest CPU cycles. (Battery friendly?)
Best for flash drives since there is no seeking penalty.
Good throughput on db systems.

Disadvantages
The reduction in CPU cycles used comes with a proportional drop in performance.
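
Switching I/O schedulers works the same way as switching governors: write the name into the queue's scheduler file as root, and the kernel marks the active one in brackets when you read it back. A sketch, again assuming mmcblk0 is the flash device; list /sys/block/ to find yours.

    # Switch the flash device's I/O scheduler to noop and verify (requires root).
    SCHED_PATH = "/sys/block/mmcblk0/queue/scheduler"

    with open(SCHED_PATH, "w") as f:
        f.write("noop")

    with open(SCHED_PATH) as f:
        # The active scheduler is shown in [brackets], e.g. "[noop] deadline cfq"
        print(f.read().strip())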

SIO

The Simple I/O scheduler aims to keep overhead to a minimum in order to serve I/O requests with low latency. There is no concept of priority queues, only basic merging. SIO is a mix between noop and deadline, with no reordering or sorting of requests.

Advantages
Simple, so reliable.
Minimized starvation of requests.

Disadvantages
Slow random-read speeds on flash drives, compared to other schedulers.
Sequential-read speeds on flash drives also not so good.

V(R)

Unlike other schedulers, synchronous and asynchronous requests are not treated separately; instead, a deadline is imposed for fairness. The next request to be served is chosen based on its distance from the last request.

Advantages
May be the best for benchmarking, because at the peak of its 'form' V(R) performs best.

Disadvantages
Performance fluctuation results in below-average performance at times.
Least reliable/most unstable.

In the end, if you have questions, feel free to reach out to me. I'll be more than happy to help you out.
