Saturday, February 19, 2011

Use of ICTs in Uganda's Elections

The 2011 Uganda elections have been characterized by mass use of information and communication technologies (ICTs). This started months before the elections, with the Electoral Commission publishing voter registers online. This enabled voters to check their designated polling stations and verify their personal and polling station details online and through the Short Message Service (SMS).

Then there were the months running up to the elections, when crowd-sourcing platforms like http://www.ugandawatch2011.org, run by the Democracy Monitoring Group (DEMGroup), and the Ushahidi-powered http://www.uchaguzi.co.ug, run by Citizens Election Watch-Information Technology (CEW-IT), were set up. These platforms aggregate real-time information sent in by citizens, a concept known as crowd-sourcing.

Crowd-sourcing
For those not familiar with crowd-sourcing, think of being in Kampala and wishing to know what is happening in Gulu. Usually you would have to wait for the next TV or radio bulletin, by which time the situation could be different or a script edited. Now, what if someone in Gulu were able to send an SMS that could be plotted on a map accessible online... that would be cool, right? Now think of one person doing the same from every district. How much information would that be? With a glance at a map you would be able to keep abreast of activities in disparate locations in near real-time. This is what we call crowd-sourcing.


Map from Uganda Watch
One of the most widely used crowd-sourcing platforms internationally is Ushahidi. This software platform was developed in Kenya and has been used during the Kenyan referendum, the Tanzanian elections last year, the coordination of relief aid in Haiti, and elections in India. It is the same platform powering Uchaguzi and parts of Uganda Watch (screenshot).


Moving on
Then there was election day, when all the media houses were in a frenzy to feed their Twitter and Facebook followers real-time information. From images of senior citizens finding alternative uses for saucepans to tallying results from every corner, the information flow was great and in most cases corroborated by other sources, lending credence to the practice.

With only one or two incidents of misreporting, the overall flow of information was overwhelming. The press and social-network-savvy citizens took it upon themselves to feed the most up-to-date information through all the online channels at their discretion.

I cannot speak to the democracy and fairness of the elections, as that is outside the purview of this blog, but I believe harnessing the power of ICTs brought about steady streams of information that are difficult to alter without leaving a trace, given the existence of a digital footprint. I am sure further adoption of ICT will help make the country more transparent and accountable in the future. Look forward to it.

Friday, February 18, 2011

IPv6 – What you need to know - Part 2

In Part 2 of our series we look at the advantages that IPv6 presents over IPv4. We also see how Internet addresses are managed globally.

In Part 1 we discussed the rumored doomsday of the Internet (IPocalypse): the day when the global address pool, from which addresses have been dished out for years, finally runs out. That day happens to have been 3rd February, and I thought it unfair to let it pass without mention. Emergency measures by the IANA (covered later in this article) have already kicked in, and as I pointed out in Part 1, nothing has broken and nothing will break on the Internet.
That said, now more than ever, deployment and adoption of IPv6 must be given the highest priority.

Now, moving on, we examine the advantages that IPv6 presents over IPv4:-
More Address Space
The most touted reason for IPv6 as a replacement for IPv4 is the sheer number of addresses. While IPv4 provides about 4 billion addresses (2^32), IPv6 allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128), or simply 340 trillion, trillion, trillion addresses. Far more than the number of stars in the galaxy!
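The two counts are easy to verify; a quick Python check of the powers of two behind them:

```python
# Address counts behind IPv4 (32-bit) and IPv6 (128-bit) addressing.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(ipv4_total)  # 4294967296, roughly 4 billion
print(ipv6_total)  # 340282366920938463463374607431768211456
```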
The less spoken-about reasons for the new addressing standard are:-

Ease of Configuration
IPv6 comes with plug-and-play capabilities, meaning that if enabled, IPv6 can work on a network without much configuration.
Security
Since the Internet began as a research project running on an isolated network, not much thought was given to security. The security mechanisms in IPv4 were retrofitted after the Internet started getting wider adoption and running into problems.

Security mechanisms that work with the Internet Protocol, notably IPSEC (Internet Protocol Security), are optional add-ons in IPv4. Support for IPSEC is mandatory in the IPv6 specification, so these measures provide a baseline of security even without you making a conscious effort to be secure. (SSL, or Secure Sockets Layer, addresses a similar need but operates above the IP layer.)

Mobility
If you have used an IP phone in a wireless environment, you know that this can be problematic. IPv4 lacks a native mechanism to transfer phone sessions from one access point (AP) to another. With the requisite network infrastructure, version 6 (through Mobile IPv6) can facilitate the move from one AP to another, giving you the same seamless experience you get from a cellphone.

Scalability
When you send an email from Uganda to, say, China, the Internet uses a routing table to calculate and store possible paths to your desired destination. The exponential growth of the Internet has meant growth of this table, resulting in degraded network performance as routers have to sift through hundreds of thousands of routes. Compared to IPv4, IPv6 mitigates this problem by allowing more similar routes to be grouped as one (route aggregation). This results in a performance boost for the Internet infrastructure, as a router takes less time and fewer resources to make a routing decision.
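Route aggregation can be illustrated with Python's standard `ipaddress` module; the prefixes below are chosen purely for illustration:

```python
import ipaddress

# Four contiguous /24 prefixes that a router would otherwise carry
# as four separate routes.
routes = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("198.51.101.0/24"),
    ipaddress.ip_network("198.51.102.0/24"),
    ipaddress.ip_network("198.51.103.0/24"),
]

# collapse_addresses merges adjacent prefixes into the fewest
# aggregate routes that cover the same address space.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('198.51.100.0/22')]
```

One aggregate route replaces four, which is exactly the saving routers get from aggregation at Internet scale.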


Efficiency and Reliability
The last merit we cover here is the efficiency of IPv6. IPv6 tremendously reduces the administrative load on a network administrator. Even with seemingly longer and stranger-looking addresses, it is actually more efficient, as each packet comes with sufficient information to help it get from source to destination without much consultation along the way.

I believe you can now agree that IPv6 is a well-thought-out solution to the addressing problem. Moving on to the next part of this article.

Global Address Management
Even though the information is freely available, IP address management still seems to elude many. I hope through this brief I can shed light on some of the questions I hear a lot on the subject of ownership and management of IP addresses. Brace yourself, for herein acronyms and bureaucracy abound.

The Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit, governs the Internet at the highest level and operates through independent organs, one of which is the Internet Assigned Numbers Authority (IANA). The IANA oversees the global allocation and management of IP addresses, autonomous system numbers, and other resources.

The IANA holds the central pool of IP addresses and allocates blocks of them to Regional Internet Registries (RIRs) on a needs basis.

There are five Regional Internet Registries (RIRs) in the world serving different regions:-
  • African Network Information Centre (AfriNIC) for Africa and parts of the Indian Ocean
  • American Registry for Internet Numbers (ARIN) for the United States, Canada, and several parts of the Caribbean region
  • Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and neighboring countries
  • Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and parts of the Caribbean region
  • RIPE Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia



Regional Internet Registries (RIRs)
First established in the 1990s, RIRs were set up by their communities to satisfy emerging technical and administrative needs in the area of resource administration.

RIRs work on the following principles:-
  • Allocation and registration of IP addresses and related “Internet resources”
  • Open policy process
  • Fair Distribution of Internet Resources
  • Technical services, training and education…
  • No involvement in DNS registration!

RIRs allocate address space to network operators like ISPs, who in turn assign these to their customers. These addresses are distributed through policies developed by the community, which includes network operators, end users, etc. Each of the five registries works in its area of jurisdiction and cooperates with the other RIRs when called on.

To understand how addresses are assigned, let's take an example: your organization needs publicly routable addresses. You have the option of contacting your Internet Service Provider directly, in which case they assign you addresses they received from the RIR (AfriNIC, in our case), or you can go directly to AfriNIC and acquire what are known as Provider Independent (PI) addresses. PI addresses can be kept even after you switch service providers.

Whether it is version 4 or version 6, there are criteria used for allocating publicly routable address space. The AfriNIC website (http://www.afrinic.net/policy.htm) is a good place to start if you plan on acquiring public addresses.

In our next issue we will wrap up this series by seeing what needs to be done for us to transition smoothly to IPv6 and who should take action.


Article First Published by Enterprise Technology - ictcreatives.com

Thursday, February 3, 2011

IPv6 - What you need to know


As time goes by, more and more buzz is being generated on the subject of IPv4 exhaustion and the transition to IPv6. The problem is that many people still don't understand what this means, how they will be affected, and how they should respond to this imminent situation.

I am hoping that this three-part series will elucidate the subject and give some important insight into the issues that continue to elude people. So in Part One, let's start from the beginning.

The Internet.
The Internet, in its simplest form, can be defined as a system of interconnected networks. This system began as a research project mainly backed by the US military and went on to become a communications medium for geeks in a few high-tech research facilities, where the creators and a few learned colleagues punched away lines of commands just to read an email.

Since the technology involved communication, or interaction, identification of the communicating nodes (computers, in this case) was necessary; hence the adoption of the Internet Protocol (IP) as the preferred addressing scheme. We won't go into the details, but suffice it to say there were other competing addressing schemes at the time; the Internet Protocol (fourth version) gained the most traction.

Internet Protocol (IP) Address
An Internet Protocol (IP) address is a unique number that identifies each host (computer, server, smartphone) on a network, or shall we say the Internet. For a host to be uniquely identified on the Internet, it must have at least one IP address. "212.238.0.1" and "2001:db8:0:1234:0:567:8:1" are examples of IPv4 and IPv6 addresses respectively.
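As an aside, the two sample addresses above can be parsed and told apart with Python's standard `ipaddress` module:

```python
import ipaddress

# The two sample addresses from the text, parsed with the standard
# library. ip_address() returns an IPv4Address or IPv6Address object.
a4 = ipaddress.ip_address("212.238.0.1")
a6 = ipaddress.ip_address("2001:db8:0:1234:0:567:8:1")

print(a4.version)  # 4
print(a6.version)  # 6
```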

The Problem
Like I mentioned earlier, the Internet was nothing more than a research project turned communication tool. The curators of this ubiquitous technology didn't exactly envisage their creation helping a housewife find a steak recipe, or even helping people with bad taste in music watch Justin Bieber on YouTube. OK, not the best example, but you get the point.

The addressing scheme (IPv4) only allowed for about 4 billion unique IP addresses and, like Bill Gates' fabled prediction on computer memory in 1981, Vint Cerf and his colleagues thought 4 billion addresses sufficient at the time. That prediction has proved inaccurate, thanks to the dot-com boom around the turn of the millennium that put computers into ordinary households, and to the rich content available on an Internet that is now over a billion users strong. Several mechanisms have been devised in the last decade to cater for this shortcoming in version 4 of the Internet Protocol addressing scheme. We will go over some of these shortly, but first, the impending doom of the Internet... or the reports of one.

IPocalypse
There have been lots of reports on the impending doom of the Internet. Some have even made allusions to an IPocalypse (an apocalypse of the Internet Protocol). I would like to state that this is erroneous and misleading. There won't be a crash of the Internet. Even after the current pool of Internet resources runs out, the Internet will continue to exist thanks to its design, early adopters of the newer version (6) of the Internet Protocol, and the major websites that already run services on this protocol. Websites like Google, Yahoo, YouTube, CNN and AfriNIC can already be reached over IPv6.

A common question is whether IPv6 was the most ideal solution to the problem of address exhaustion; let's take a look at some of the other remedies that have gained wide adoption and why they fall short of being a panacea for the problem.

NAT
Network Address Translation (NAT) allows network operators to allocate private addresses to end users and requires only one or a few globally reachable addresses for a potentially large group of customers. Of course, this means the end users have to go through the gateway for traffic to the Internet. The problems with this are that it: 1) breaks the end-to-end model of the Internet Protocol and the Internet itself; 2) mandates that the network keep the state of connections; 3) makes fast rerouting difficult, as traffic has to go out through the node facing the global Internet at all times; 4) by its nature breaks the end-to-end security model; and 5) doesn't work well with applications that are not NAT-friendly.
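To make the "keep the state of connections" point concrete, here is a toy sketch of the translation table a NAT gateway must maintain; it is not any real gateway's implementation, and all addresses and port numbers are invented for illustration:

```python
# Toy NAT state: every outbound connection from a private address is
# rewritten to the single public address, and the mapping must be
# remembered so replies can be routed back. Lose the state, lose the
# connection.

PUBLIC_IP = "203.0.113.5"  # the one globally reachable address

class NatTable:
    def __init__(self):
        self._next_port = 40000
        self._out = {}   # (private_ip, private_port) -> public_port
        self._back = {}  # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self._out:
            self._out[key] = self._next_port
            self._back[self._next_port] = key
            self._next_port += 1
        return (PUBLIC_IP, self._out[key])

    def translate_inbound(self, public_port):
        # A reply is only deliverable while this state exists.
        return self._back.get(public_port)

nat = NatTable()
print(nat.translate_outbound("192.168.1.10", 5000))  # ('203.0.113.5', 40000)
print(nat.translate_inbound(40000))                  # ('192.168.1.10', 5000)
```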

This is why NAT comes off as not a very good solution to the exhaustion problem.


CIDR
Classless Inter-Domain Routing (CIDR) employs aggregation strategies to minimize the size of the Internet's routing table. CIDR allows routers to group routes together in order to cut down on the quantity of routing information carried by the core routers. With CIDR, several IP networks appear to networks outside the group as a single, larger entity.

CIDR is perhaps the most widely used method, but because the Internet is growing constantly, it just can't keep up with the exhaustion of a finite resource.
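The grouping CIDR describes can be sketched with Python's `ipaddress` module: two adjacent /25 networks appear to the outside world as one /24 (the addresses are illustrative):

```python
import ipaddress

# Two adjacent /25 networks inside an organization.
n1 = ipaddress.ip_network("192.0.2.0/25")
n2 = ipaddress.ip_network("192.0.2.128/25")

# To networks outside the group, both can be advertised as a single
# larger /24 prefix.
supernet = n1.supernet(new_prefix=24)
print(supernet)                # 192.0.2.0/24
print(n2.subnet_of(supernet))  # True
```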

DHCP
Dynamic Host Configuration Protocol (DHCP) is the protocol used to assign addresses to hosts on a network automatically. DHCP is used to avoid the administrative burden of assigning static addresses to each device on a network. It also allows multiple devices to share limited address space if only some of them need to be online at any particular time.

The problem is that nodes communicating over the Internet increasingly need an always-on connection, which DHCP simply doesn't offer. This makes it a less-than-ideal solution.
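The address-sharing idea behind DHCP can be sketched in a few lines. This illustrates only the lease concept, not the actual DHCP wire protocol, and all names and addresses here are made up:

```python
# A minimal lease pool: a small set of addresses is handed out on
# demand and reclaimed on release, so more devices than addresses
# can take turns. A latecomer gets nothing while the pool is empty,
# which is why DHCP sharing fails for always-on nodes.

class LeasePool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}  # client id -> leased address

    def request(self, client):
        if client in self.leases:
            return self.leases[client]   # renew existing lease
        if not self.free:
            return None                  # pool exhausted
        addr = self.free.pop(0)
        self.leases[client] = addr
        return addr

    def release(self, client):
        addr = self.leases.pop(client, None)
        if addr is not None:
            self.free.append(addr)

pool = LeasePool(["10.0.0.2", "10.0.0.3"])
print(pool.request("aa:bb"))  # 10.0.0.2
print(pool.request("cc:dd"))  # 10.0.0.3
print(pool.request("ee:ff"))  # None, must wait for a release
pool.release("aa:bb")
print(pool.request("ee:ff"))  # 10.0.0.2
```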

In Part 2, we shall talk about what makes IPv6 a more ideal solution to the current exhaustion problem, how it makes up for some of the shortcomings in the intermediary solutions discussed above, and the major differences between it and IPv4. We shall also look at how IP addresses are managed globally.

Why Your Website is being attacked and what you can do to Prevent it

A couple of years back, when we attended Internet governance meetings, all we did was discuss access and connectivity. The security and privacy issues that the early-adopting nations grappled with, we mentioned only in passing. This is fast changing, thanks to the efforts of groups like YOGYACARDERLINK, REALQW, and C4UR, who have made it their business to wake up East Africa with their relentless hacking attempts.

Problem
Employing various methods, these groups have felled, and continue to fell, targets at alarming rates. The targets include governments, NGOs, and businesses; name it, they have all got a pinch on the ear from these very unequivocal teachers, who occasionally leave messages like "Where is your security!" on hacked sites.

Now, whereas the target selection seems random, the success rate of these attacks is astounding. This begs the question: what is common to these targets, and why are these miscreants succeeding in this very malevolent quest?

Background
To answer this question, a few points may be helpful: 1) most of the targets felled are websites; 2) nearly all the hacked sites use CMSs, particularly Joomla; and 3) nearly 90% of the time an SQL injection is used with success.

Solution
OK, now with few unknowns, let's see how we can protect ourselves against some of the most common forms of attack.

Security Framework
The first and most important aspect of online security is a security framework. This is a blueprint, and without it website developers and admins will be unable to develop or maintain secure web applications. This document will usually cover access levels, file permissions, and other security best practices. It is critical that a corporation involved in any sort of development on the web embed this into every single undertaking. Incidents at Facebook and Twitter are living testimony to what can happen if security measures are not adopted early in the development life cycle.

Update Web apps
Content Management Systems (CMSs) have greatly improved the speed and manner in which we design, build and deploy websites and other web applications. Because of this, businesses have shifted their focus to rapid deployment and getting as much info out as possible. The unintended consequence is that security is generally overlooked. Fortunately, most of the commonly used CMSs (Joomla, Drupal and WordPress, to name three) allow for automatic updating of modules and extensions. If you use a CMS, be sure to enable updates so that flawed modules are patched as soon as a vulnerability fix is released. This can drastically lower your attack surface.

Database Prefix and Version Numbers
Many SQL injection tools are written to exploit CMSs in their default form. Take Joomla, for instance: its default database prefix is "jos_". A change in the database prefix alone will make most SQL exploits against your database fail.

And speaking of default forms and Joomla: extensions have vulnerabilities in particular versions, and hackers usually abandon a target if reconnaissance yields unreliable information or none at all. By removing the version numbers from your extensions, you drastically lower your chances of being hit by opportunistic attacks in the wild. You also create far more work for targeted attacks, given how much guessing the attacker has to do.

Sanitize User input
An SQL injection occurs when a site is unable to preserve its query structure given certain forms of (usually malicious) input. The web application then executes a query it otherwise shouldn't have processed, with malicious results. Sanitization includes preventing URL parameters from being executed as code and blocking operations that write to or delete from the database.
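A common way to preserve query structure is to use parameterized queries, where user input is bound as a value rather than spliced into the SQL string. Here is a minimal sketch with Python's built-in `sqlite3`; the table, data and input are invented for illustration:

```python
import sqlite3

# A throwaway in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the query,
# turning it into ... WHERE name = '' OR '1'='1', which matches all.
unsafe = "SELECT * FROM users WHERE name = '" + malicious + "'"
print(len(conn.execute(unsafe).fetchall()))  # 1, the injection succeeded

# Safe: the ? placeholder treats the whole input as a literal value,
# so the query structure cannot be altered.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(rows))  # 0, no user is literally named "' OR '1'='1"
```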

Rewrite URL's
With Google hacking, a search term like "inurl:com_contact" can be used to find vulnerable hosts on the Internet. If your URLs are in their original form, they could expose you to real threats. The good news is that most CMSs today have modules to rewrite your URLs from something like "test.ug/index.php?option=com_content" to "test.ug/index.php/sponsors.html". The latter is easier to read, easier for search engines to index, and obfuscates the web application's components, offering you security in the process.

Permissions
During installation and updating of CMSs, it is common for modules to write to certain files and directories. It is also commonplace, especially for the less adept web admin, to allow more permissions than are required in an attempt to make administration easier. This creates the potential for upload and execution of files should an attack be mounted on you. Always grant modules just sufficient permissions; anything over and above can be misused. Also ensure you downgrade these after installation if your modules really do require a temporary privilege escalation.

As a standard, your PHP files (.php extension) should be set with a mask of 644; configuration files should likewise be kept at 644 or stricter (a world-writable 666 invites tampering), while directories should generally maintain 755. Use of the .htaccess file in most web-server environments, especially Apache, will allow you to make directory-level changes and keep the permissions uniform across the board.
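Applied programmatically, a 644 mask looks like this; a small Python sketch using only the standard library (the temporary file exists purely for demonstration):

```python
import os
import stat
import tempfile

# Octal literals in Python take the 0o prefix, so a 644 mask
# (owner read/write; group and others read-only) is written 0o644.
fd, path = tempfile.mkstemp(suffix=".php")
os.close(fd)
os.chmod(path, 0o644)

# Read back the permission bits to confirm the mask was applied.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644
os.remove(path)
```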


Changing Default Passwords
During installation, CMSs will set up a default password. You must ensure you change these, as trivial passwords like "password" and "admin" are known to anyone who has ever done an installation, let alone the hackers. You open your website up to the possibility of a complete takeover by leaving your passwords at the default.
Choosing a long, hard-to-guess password that has a combination of alphanumeric as well as special characters will go a long way towards preventing you from getting hacked.
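One simple way to produce such a password is with Python's standard `secrets` module; the character set and length below are only an example, not a prescription:

```python
import secrets
import string

# Letters, digits and a handful of special characters; adjust the
# set to whatever the site's password policy allows.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"

# secrets.choice draws from a cryptographically strong random
# source, unlike the random module.
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(len(password))  # 16
```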


Testing Web Apps
Even with the best security framework and practices, it is possible to overlook certain parts of your application's security. The solution is vulnerability testing, which can cater for our human flaws.

There are myriad tools available today, both free and premium, with some being complex to install or even requiring Linux to run, while others are as easy as a Firefox add-on. At the end of the day, it depends on what you want to achieve.

An example of a Firefox add-on is "SQL Inject Me". This will crawl your web page, test form fields and other inputs for SQL injection vulnerabilities, and present you with a report at the end of the scan, which usually lasts only a couple of minutes.

You now have all the information you need. I hope you can prevent attacks on your web applications by putting it to use.