GSoC 2013: Week 2 of Grandham project

Week 2 of Grandham was filled with more architecture decisions and implementations.

Solving the versioning problem

It’s quite essential to store different user contributions on the same subject separately when an application supports crowdsourcing. We cannot let users overwrite the existing data without any moderation. Hence, in Grandham, we have a ‘Book’ model with many ‘Submissions’, one of which will be the approved submission. A book’s individual page basically shows the details from this approved submission.

Book has many submissions

It will be the task force (a set of users with advanced privileges) who approve submissions. They will have options to comment on or even edit a submission.
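The Book/Submission split can be sketched in plain Ruby. This is only an illustration of the idea (the real app would use ActiveRecord associations such as `has_many :submissions`); every class and method name below is an assumption, not Grandham’s actual code.

```ruby
# Illustrative sketch of the versioning model: a book keeps every
# submission, and the page shows only the approved one.
Submission = Struct.new(:data, :approved) do
  def approve!
    self.approved = true
  end
end

class Book
  attr_reader :submissions

  def initialize
    @submissions = []
  end

  # Every edit is stored as a new submission instead of overwriting data.
  def submit(data)
    @submissions << Submission.new(data, false)
    @submissions.last
  end

  # The book page shows data from the single approved submission.
  def approved_submission
    @submissions.find(&:approved)
  end

  # Task force approval: exactly one submission is approved at a time.
  def approve(submission)
    @submissions.each { |s| s.approved = false }
    submission.approve!
  end
end

book  = Book.new
first = book.submit(title: 'Old Title')
book.approve(first)
book.submit(title: 'New Title')          # pending, awaiting moderation
puts book.approved_submission.data[:title]  # → Old Title
```

Until the task force approves the newer submission, readers keep seeing the previously approved data.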


Thanks to Travis CI, the Grandham project now has a Continuous Integration server. We also use Gemnasium to track dependency packages.

GSoC 2013: Week 1 of Grandham project

The first week of development, which started on the 17th of June, essentially comprised requirement analysis, planning and API implementation.

Requirement Analysis

Though the basic requirements for Grandham were discussed a month before, it was necessary to have more clarity on the subject. I had a series of discussions with mentors on various aspects of the application, especially on how this project could attract information from contributors of different competency levels. It was sorted out that a normal user could provide just the basic information such as Title, Author(s), Publisher(s), Year, Pages, Edition and Description. Advanced users, who are competent with the MARC21 format, could contribute advanced bibliographic information through a specially designed interface.

As mentioned in the previous post, we had a meeting with Sri K. H. Hussain. Apart from inspiring and guiding us, he made us aware of the importance of having an API and integrating with the various library systems in existence. I especially remember seeing a live instance of Koha and its super complex MARC21 data input form. It would be really great if it could consume the Grandham API and vice versa, so that we would have proper data sharing between the two information resources.


I had some confusion about how to store information. I was thinking of extracting basic information from the complex MARC21 data to display on an individual book’s page. But since it was decided that users feed them in separately, we could easily store basic information on its own and use it. Separate data models to store information as key/value pairs were written, but I removed them all for the sake of readability and simplicity. It was during this period that we decided to integrate Bootswatch as the front end CSS framework.

API Implementation

The Books and Fields APIs were written during this period using jbuilder. The data importer script was updated to meet the changes in the application.
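As a rough illustration of what such an API could return, here is an assumed response shape built with Ruby’s stdlib JSON (in a jbuilder template the same shape would be declared field by field, e.g. `json.title book.title`). The field names follow the basic fields listed earlier; the exact schema of the Grandham API is not shown here and this is not it.

```ruby
require 'json'

# Hypothetical Books API payload using the basic fields mentioned in
# the post: Title, Author(s), Publisher(s), Year, Pages, Edition,
# Description. The values are made-up sample data.
book = {
  title:       'Example Book',
  authors:     ['Author One'],
  publishers:  ['Example Publisher'],
  year:        2013,
  pages:       200,
  edition:     1,
  description: 'A sample record.'
}

payload = JSON.generate(book)
puts payload
```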

GSoC 2013: Updates on Grandham project

A couple of weeks ago, we (Anivar, Manoj and I) had a meetup with Sri K. H. Hussain, who is a pioneer in library science. He has worked on many projects including the earlier version of Grandham and has been a member of its task force. He offered guidance for development, and his inspiring vision for this project helped us fix the goals.

(The trip for the meetup was fun. Though it was for Google ‘Summer’ of Code, we got drenched in the heavy monsoon of Kerala ;))

Meanwhile, the coding period for GSoC had started; it was time to hasten the development process and get the project up as fast as possible. I had drawn a few design mockups and have started implementing them.

1. Home
2. Single book
3. Add new book (Basic fields)
4. Sign Up
5. Sign In
6. Task force moderation
Most of these designs are just helpers to assist development and get the backend and features done. We will do proper styling and UX enhancements after implementing the backend.

Currently the application has the following features:

  • Basic books listing and API
  • Bootswatch integration
  • Dynamic URLs in Navbar according to the language selected
  • Add book form
  • Edit book (which would save the data as a new revision)
  • Decent test coverage

A live instance of the application in development is running here –

Regarding the representation of binary MARC21 data in a relational database, we decided to convert it to MARCXML, after which it shouldn’t be a problem to store. It needs a bit more research on how many attributes and data fields it would take to completely express MARC data; I’m hoping to write a separate post on that later.
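To make the idea concrete, here is a minimal sketch of building a MARCXML record with Ruby’s stdlib REXML; the resulting string can be stored in an ordinary TEXT column. The sample field (tag 245, subfield a = title) follows the MARC21 standard, but how Grandham actually performs the binary-to-MARCXML conversion and storage is not shown here.

```ruby
require 'rexml/document'

# Build a tiny MARCXML record containing only a title (MARC field 245 $a)
# to show that, once converted from binary MARC21, the data is plain XML
# text and fits easily into a relational database column.
def marcxml_for(title)
  doc    = REXML::Document.new
  record = doc.add_element('record',
                           'xmlns' => 'http://www.loc.gov/MARC21/slim')
  field  = record.add_element('datafield',
                              'tag' => '245', 'ind1' => ' ', 'ind2' => ' ')
  field.add_element('subfield', 'code' => 'a').text = title

  xml = String.new
  doc.write(xml)
  xml # this string can go straight into a TEXT column
end

puts marcxml_for('Example Title')
```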

That’s all for now. Happy hacking, folks!

Hello Google Summer of Code 2013

This summer I’m contributing to Swathanthra Malayalam Computing through Google Summer of Code 2013 by developing an application called Grandham, a bibliographic data project. I have three mentors for this project, namely Anivar Aravind, Mahesh Mukundan and Baiju Muthukadan.

Grandham is essentially a web application to find, contribute and share bibliographic information. It will serve as an enhanced version of the live application we have now – the new application will have support for multiple languages, the MARC21 data format, a RESTful API, internationalization and a lot more.

May 27 – June 17 is Community Bonding Period and I’m using this time to discuss and finalize various parts of application with mentors. So far we have decided how the application should collect information from users and how it should organize and store them internally. We need to strictly follow a scalable architecture and avoid internal data duplications.

The project is being built with the Ruby on Rails web framework and MySQL as the database. The first phase of development includes designing the rest of the application and completing the basic API. It’s pretty exciting to build this project. You can follow the development at github. Happy Hacking!

Announcing – A Social Network based on Quiz

I’m very happy to announce my recent hobby project – it is a social network for people who love quizzing.

I’ve been observing an increasing number of people on Facebook who make and join groups to ask GK questions for information sharing and fun. But Facebook groups are not built for quizzing, and the whole fun of finding the answer lasts only until the first person finds it and posts it as a comment. QuizGrid essentially solves this problem: answering and commenting on a question are separate entities in QuizGrid. Also, support for questions with options is another advantage among the many other features of this quizzing platform.

Unlike traditional quiz pages on the web, QuizGrid gives you a social network experience with a feed of questions ranked according to the users you follow. QuizGrid also shows actions like ‘Bob and Alice answered this question’ or ‘John, Tom and 4 others commented on this question’, and so on, if you follow Bob, Alice, John, Tom and those 4 other users. QuizGrid also has data crunching algorithms to show you relevant and concise real time notifications.
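The ‘Bob and Alice answered this question’ style summaries described above could be produced roughly like this. This is purely an illustrative sketch, not QuizGrid’s actual code: filter actors down to the users you follow, show the first couple of names, and compress the tail into “and N others”.

```ruby
# Summarize who among the users you follow acted on a question,
# e.g. "Bob and Alice answered this question" or
# "John, Tom and 4 others commented on this question".
def activity_summary(actors, following, verb)
  relevant = actors.select { |a| following.include?(a) }
  return nil if relevant.empty?

  shown = relevant.first(2)
  rest  = relevant.size - shown.size
  names = if rest.zero?
            shown.join(' and ')
          else
            "#{shown.join(', ')} and #{rest} #{rest == 1 ? 'other' : 'others'}"
          end
  "#{names} #{verb} this question"
end

following = ['Bob', 'Alice', 'John', 'Tom', 'Dave', 'Eve', 'Mallory', 'Trent']

puts activity_summary(['Bob', 'Alice'], following, 'answered')
# → Bob and Alice answered this question
puts activity_summary(['John', 'Tom', 'Dave', 'Eve', 'Mallory', 'Trent'],
                      following, 'commented on')
# → John, Tom and 4 others commented on this question
```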

As a software developer, programming this project was immense fun and a challenge. I remember rewriting, with raw SQL queries, the entire ORM code for extracting relevant questions based on the actions of followed users, since the ORM implementation became very slow on a simulation test with thousands of questions and users. There’s definitely room for improvement and new features. Since it’s a hobby project, I develop QuizGrid during nights and weekends; I look forward to releasing more features and performance fixes in the coming weekends.

Please sign up if you are interested in quizzing and networking with like-minded people. Cheers!

Chrome extension for multilingual input using jQuery.ime

It was fun attending the Wikipedia DevCamp Bangalore edition at IIM Bangalore. I got the opportunity to meet lots of people and had a memorable time building interesting things with them. And it was then that I pushed my first code contribution to Wikipedia, a bug fix to the MediaWiki ULS extension :)

Here’s a Chrome extension I built as part of the hackathon, using which you can input text in many languages in the browser. It uses the brilliant jQuery.ime library built by the Wikimedia Foundation, which has support for many languages. This extension works completely offline, and it won a cool Wikipedia shoulder bag too 😉

Thanks to Santhoshettan who gave the initial spark for this idea and helped me throughout.

Chrome extension :


Event Photos:

Releasing – A web app to find and share campus fest and events!

I’m very happy to release my little hobby project, a place where you can find and share campus fests across India. The backend is written in Ruby using the Ruby on Rails framework, and I’ve been having a nice time developing it over the last few weekends :)

The initial spark to build it came from my experience of going to campus fests in Kerala and realizing the need for a simpler way of sharing information about fests. Many times I’ve seen students from other states (like Rajasthan and Haryana) attending fests in Kerala, and I wondered how they got the necessary information. Most of them said they either got a personal invitation or came to know about the fest from some website. I thought about the thousands of students who were missing the opportunities because they didn’t come to know about them on time. Also, I’ve seen some campus fests without much outside participation; they were not able to promote the fest to interested students. I wanted to make a little solution for this and hence built it. Thanks to Arjun for the logo :)

To those who are interested in the technologies used: it’s written in Ruby on Rails with a MySQL database and a little jQuery. It runs on a dedicated VPS with nginx and Passenger.

I hope you like it. If you happen to find any bugs, please report them. Thank you :)

Fix for Rails 3.x.x error “[FATAL] failed to allocate memory” in OS X Lion

This error happens when the mysql2 gem is either installed for some other version of MySQL or when it cannot find the required dynamic library. To fix this error, uninstall the mysql2 gem and reinstall it as follows:

$ env ARCHFLAGS="-arch x86_64" gem install mysql2 -v='0.3.11' -- --with-mysql-dir=/usr/local/mysql --with-mysql-lib=/usr/local/mysql/lib --with-mysql-include=/usr/local/mysql/include --with-mysql-config=/usr/local/mysql/bin/mysql_config

Please change ‘/usr/local/mysql’ to your MySQL installation path. Thank you.

Set up Rails 3.2.2 with Passenger, rvm and nginx

Ruby on Rails is becoming more feature rich and powerful with every release. Naturally, the steps to get it working in a production environment also keep changing. I’ve been trying to set up Rails 3.2.2 for a while, and here’s the method that finally worked. This method should work for the new Rails release, 3.2.3, too.

Install Server Operating System

We have a large pool of operating systems to choose from for our server. Though Debian Squeeze and CentOS have proved their stability in serving Rails applications, I would prefer an LTS version of the Ubuntu Server edition. I did a small survey: many experienced programmers said they are using Ubuntu Server as it’s easy to maintain, packages are plentiful and it has better stability. Also, Canonical promises 5 years of support for LTS edition operating systems. The current LTS edition is Ubuntu Server 10.04.

Install RVM

Ruby Version Manager (RVM) helps to manage multiple Ruby versions on a single machine and lets us quickly switch Ruby/Rails versions. Before installing RVM, we need to install git, curl and autoconf. Use the following command to do it:

$ sudo apt-get -y install git-core curl autoconf

Then install and configure RVM,

$ bash -s stable < <(curl -s

$ echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function' >> ~/.bash_profile

Source ~/.bash_profile to add rvm as a function to the shell:

$ source ~/.bash_profile

Install Ruby

Before installing Ruby, we need to install all the dependency packages. The following command lists them:

$ rvm requirements 

We can see a line with lots of package names, something like the following. Execute it directly with sudo in the shell to install the packages:

sudo /usr/bin/apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion 

The next step is to install Ruby. The current latest version of Ruby is 1.9.3. Let’s install it:

$ rvm install 1.9.3

It will take a few minutes to fetch source, configure and compile.

After completing installation, we can set 1.9.3 as the default version:

$ rvm use 1.9.3 --default

Install passenger

Passenger is a free module for Apache and nginx to run Ruby applications. Luckily, it’s available as a Ruby gem, and it’s easy to install and configure.

Install passenger gem

$ gem install passenger

Install and configure nginx

$ rvmsudo passenger-install-nginx-module

This command downloads the nginx source code, builds it and finally configures Passenger for us. The default location of nginx is /opt/nginx, and we can find the configuration in /opt/nginx/conf/nginx.conf

If you open the nginx configuration, you can see that the following lines have already been added to the ‘http’ section:

passenger_root /home/ershad/.rvm/gems/ruby-1.9.3-p125/gems/passenger-3.0.11;
passenger_ruby /home/ershad/.rvm/wrappers/ruby-1.9.3-p125/ruby;

Install Rails

Before installing Rails, we will create a gemset. Gemsets are a feature of RVM that lets us create multiple sets to hold different gems of different versions.

Let’s create a gemset for Rails 3.2.2

$ rvm gemset create rails322

$ rvm gemset use rails322

Installing Rails is pretty straightforward.

$ gem install rails -v 3.2.2

If you are not interested in the documentation that comes along with the rails gem, use the following command instead:

$ gem install rails -v 3.2.2  --no-ri --no-rdoc 

Install nginx init script

The nginx init script by Jason Giedymin helps us administer the web server easily.

$ cd
$ git clone git://
$ sudo mv rails-nginx-passenger-ubuntu/nginx/nginx /etc/init.d/nginx
$ sudo chown root:root /etc/init.d/nginx

Deploying the application

Let’s store all Rails applications under the /var/rails_apps/ directory. Let’s make such a folder:

$ sudo mkdir -p /var/rails_apps
$ sudo chmod 777 /var/rails_apps/ #giving full file permissions

Let’s create a sample rails application in rails_apps directory

$ cd /var/rails_apps
$ rails new helloworld
$ cd helloworld
$ vim Gemfile # and uncomment the line to include 'therubyracer' gem. We need a javascript runtime
$ bundle install
$ bundle exec rake assets:precompile #Precompile assets to public/ dir

The next step is to point nginx to this location. Add the following snippet in /opt/nginx/conf/nginx.conf:

server {
  listen 80;
  rails_env production;
  root /var/rails_apps/helloworld/public; # <--- be sure to point to 'public'!
  passenger_enabled on;
}

Restart the server

 sudo /etc/init.d/nginx restart

Your application must be alive and running now! :)

Points to remember

1) The Rails application doesn’t get updated when we change the code. This is because we need to restart Passenger explicitly.

Restarting Passenger is easy; we just have to create a file ‘restart.txt’ in the tmp/ dir of the application. For example:

$ cd /var/rails_apps/helloworld

$ touch tmp/restart.txt

2) Always precompile assets after generating a controller or scaffold.

3) Make sure you are migrating in the ‘production’ environment. This can be done using the following command:

 rake db:migrate RAILS_ENV=production

4) When you get errors related to routes, check the list of all routes:

 $ rake routes 

5) When something goes wrong, see log/production.log

6) If you happen to get Passenger errors related to missing gems, just add those gems to the Gemfile and use the following command:

 $ bundle install --path vendor/bundle 

That’s it. Happy hacking with Ruby on Rails. Thank you.


Python script for automatic synchronisation of ‘Read it later’ list and webpage


One of the powerful features of Firefox is that it has thousands of useful addons. We are going to deal with one such addon – Read It Later. It allows us to bookmark a page to read later with just a click. I was happy living with it.

But one fine day, Hrishiettan told me about sharing the links we read and its advantages. Sharing links helps many people find interesting articles easily, and it also acts as a link index in case we want to refer to them again. The idea is really good, and he said he would start working on a PHP app.

On another day, I told Raghesh sir about this. He said there’s an option in ‘Read It Later’ to export our list to HTML. That’s it! They have an API too! I got the API key from their website, cleared all existing links as they contained irrelevant news links, and wrote the following script. It now updates the links every 5 minutes automatically from my ‘Read It Later’ list. Thank you :)

P.S.: The standard way of XML parsing didn’t work, which is why I wrote it crudely :)