MongoDB Focused Articles

Storing polymorphic classes in MongoDb using C#

NoSQL databases like MongoDB give us the advantage of storing our classes directly in the data store without worrying too much about schemas, saving us from the object-relational impedance mismatch. A common scenario that arises from storing classes is how to handle inheritance hierarchies.

In this post I will discuss how MongoDB handles polymorphic classes, i.e. inheritance. I am using the official C# driver for MongoDB.

To start with, the first thing to know is that MongoDB fully supports polymorphic classes, and all the classes in your class hierarchies can be part of the same Mongo collection. It does this with a concept called type discriminators.

Type Discriminator

The way Mongo distinguishes between various hierarchical types is by including a field named ‘_t’, called the type discriminator.


Let’s consider a simple class hierarchy to illustrate the concept.


The point here is that while saving the data I am always going to use the base class, as shown below, and type discriminators will help in serializing and de-serializing the actual type.

var writeConcernResult = _dataContext.ContentCollection.Save<ContentBase>(content);
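To make this concrete, here is a sketch of what such a saved document looks like; the Article subclass and its fields are hypothetical, not from the original post:

```javascript
// Hypothetical document as stored for a derived type named "Article".
// The C# driver adds the "_t" element holding the concrete type name.
const stored = {
  _id: "53a1e5beb2f8bc0d3c6a4f1e",
  _t: "Article",                    // type discriminator
  Title: "Polymorphism in MongoDB"
};

console.log(stored._t); // "Article"
```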

There are multiple ways in which the type can be distinguished, and hence Mongo provides two built-in type discriminator conventions:

  • ScalarDiscriminatorConvention: in this case the ‘_t’ element contains the type name by default.
  • HierarchicalDiscriminatorConvention: this convention comes into play when you mark one of the classes in the hierarchy as the root class, as shown below.

[BsonDiscriminator(RootClass = true)]
public abstract class ContentBase

Additionally, you need to specify the known types using the BsonKnownTypes attribute so that the correct type is created while de-serializing the content back into objects.

In this case the type discriminator stores the whole hierarchy as an array.
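Sketched with the same hypothetical Article subclass, the stored document now looks like this:

```javascript
// Hypothetical document under the HierarchicalDiscriminatorConvention.
// "_t" now holds the path from the root class down to the concrete type.
const stored = {
  _id: "53a1e5beb2f8bc0d3c6a4f1f",
  _t: ["ContentBase", "Article"],   // root class first, concrete type last
  Title: "Polymorphism in MongoDB"
};

console.log(stored._t.join(" -> ")); // "ContentBase -> Article"
```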

Custom Type Discriminator Convention

You also have the option of writing your own custom type discriminator convention. Using a custom convention you can change the way the type discriminator is stored, or what is stored.

For our example we will just change the name of the element which stores the type discriminator, i.e. from ‘_t’ to ‘_contentType’. This may actually be required when you are working with different Mongo drivers.

public class ContentTypeDiscriminatorConvention : IDiscriminatorConvention
{
    public string ElementName
    {
        get { return "_contentType"; }
    }

    public Type GetActualType(MongoDB.Bson.IO.BsonReader bsonReader, Type nominalType)
    {
        var bookmark = bsonReader.GetBookmark();
        bsonReader.ReadStartDocument();
        string typeValue = string.Empty;
        if (bsonReader.FindElement(ElementName))
        {
            typeValue = bsonReader.ReadString();
            bsonReader.ReturnToBookmark(bookmark);
        }
        else
        {
            throw new NotSupportedException();
        }
        return Type.GetType(typeValue);
    }

    public MongoDB.Bson.BsonValue GetDiscriminator(Type nominalType, Type actualType)
    {
        return actualType.Name;
    }
}

The type discriminator is then stored under the new element name.
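Concretely (again with a hypothetical Article subclass), the document now carries the renamed discriminator:

```javascript
// Hypothetical document once the custom convention is registered:
// the discriminator element is renamed from "_t" to "_contentType".
const stored = {
  _id: "53a1e5beb2f8bc0d3c6a4f20",
  _contentType: "Article",          // was "_t" under the default conventions
  Title: "Polymorphism in MongoDB"
};

console.log("_t" in stored);      // false
console.log(stored._contentType); // "Article"
```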



This article is by Jagmeet Singh.

Working with Geospatial support in MongoDB: the basics

A project I’m working on requires storage of and queries on Geospatial data. I’m using MongoDB, which has good support for Geospatial data, at least good enough for my needs. This post walks through the basics of inserting and querying Geospatial data in MongoDB.

First off, I’m working with MongoDB 2.4.5, the latest. I initially tried this out using 2.2.3 and it wasn’t recognizing the 2dsphere index I set up, so I had to upgrade.

MongoDB supports storage of Geospatial types, represented as GeoJSON objects, specifically the Point, LineString, and Polygon types. I’m just going to work with Point objects here.

Once Geospatial data is stored in MongoDB, you can query for:

  • Inclusion: Whether locations are included in a polygon
  • Intersection: Whether locations intersect with a specified geometry
  • Proximity: Querying for points nearest other points

You have two options for indexing Geospatial data:

  • 2d : Calculations are done based on flat geometry
  • 2dsphere : Calculations are done based on spherical geometry

As you can imagine, 2dsphere is more accurate, especially for points that are further apart.

In my example, I’m using a 2dsphere index, and doing proximity queries.

First, create the collection that’ll hold a point. I’m planning to work this into the Sculptor code generator so I’m using the ‘port’ collection which is part of the ‘shipping’ example MongoDB-based project.

> db.createCollection("port")
{ "ok" : 1 }

Next, insert records into the collection including a GeoJSON type, point. According to MongoDB docs, in order to index the location data, it must be stored as GeoJSON types.

> db.port.insert( { name: "Boston", loc : { type : "Point", coordinates : [ 71.0603, 42.3583 ] } }) 
> db.port.insert( { name: "Chicago", loc : { type : "Point", coordinates : [ 87.6500, 41.8500 ] } })  

> db.port.find()  

{ "_id" : ObjectId("51e47b4588ecd4e8dedf7185"), "name" : "Boston", "loc" : { "type" : "Point", "coordinates" : [  71.0603,  42.3583 ] } }
{ "_id" : ObjectId("51e47ee688ecd4e8dedf7187"), "name" : "Chicago", "loc" : { "type" : "Point", "coordinates" : [  87.65,  41.85 ] } } 

The coordinates above, as with all coordinates in MongoDB, are in longitude, latitude order.

Next, we create a 2dsphere index, which supports geolocation queries over spherical spaces.

> db.port.ensureIndex( { loc: "2dsphere" })

Once this is set up, we can issue location-based queries, in this case using the ‘geoNear’ command:

> db.runCommand( { geoNear: 'port', near: {type: "Point", coordinates: [87.9806, 42.0883]}, spherical: true, maxDistance: 40000})
{
     "ns" : "Shipping-test.port",
     "results" : [
          {
               "dis" : 38110.32969523317,
               "obj" : {
                    "_id" : ObjectId("51e47ee688ecd4e8dedf7187"),
                    "name" : "Chicago",
                    "loc" : {
                         "type" : "Point",
                         "coordinates" : [
                              87.65,
                              41.85
                         ]
                    }
               }
          }
     ],
     "stats" : {
          "time" : 1,
          "nscanned" : 1,
          "avgDistance" : 38110.32969523317,
          "maxDistance" : 38110.32969523317
     },
     "ok" : 1
}

A similar query using ‘find’ and the ‘$near’ operator fails, and the error message shows why: the query references the field ‘port’ rather than the indexed ‘loc’ field, so no geospatial index is found:

> db.port.find( { "port" : { $near : { $geometry : { type : "Point", coordinates: [87.9806, 42.0883] } }, $maxDistance: 40000 } } )
error: {
     "$err" : "can't find any special indices: 2d (needs index), 2dsphere (needs index),  for: { port: { $near: { $geometry: { type: \"Point\", coordinates: [ 87.9806, 42.0883 ] } }, $maxDistance: 40000.0 } }",
     "code" : 13038
}

This article is by Ron Smith.

Use Cases Of MongoDB

MongoDB is a relatively new contender in the data storage circle compared to giants like Oracle and IBM DB2, but it has gained huge popularity with its distributed key-value store, MapReduce calculation capability and document-oriented NoSQL features.

MongoDB has been rightfully acclaimed as the “Database Management System of the Year” by DB-Engines.

Along with these features, MongoDB has numerous advantages compared to traditional RDBMSs. As a result, lots of companies are vying to employ the MongoDB database. Here is a look at some real-world use cases, where organisations are adopting it, if not as their entire data layer, then at least as an addition to their existing databases.


Aadhar is an excellent example of a real-world use case of MongoDB. In recent times, there has been some controversy revolving around the CIA’s non-profit venture capital arm, In-Q-Tel, backing the company which developed MongoDB. Putting the controversy aside, let’s look at MongoDB’s role in Aadhar.

India’s Unique Identification project, aka Aadhar, is the world’s biggest biometrics database. Aadhar is in the process of capturing demographic and biometric data of over 1.2 billion residents. Aadhar has used MongoDB as one of its databases to store this huge amount of data. MongoDB was among several database products, alongside MySQL, Hadoop and HBase, originally procured for running the database search. Here, MySQL is used for storing demographic data and MongoDB is used to store images. Reportedly, MongoDB has nothing to do with the “sensitive” data.


Shutterfly is a popular Internet-based photo sharing and personal publishing company that manages a store of more than 6 billion images with a transaction rate of up to 10,000 operations per second. Shutterfly is one of the companies that transitioned from Oracle to MongoDB.

During the evaluation at the time of transitioning to MongoDB, it became apparent that a non-relational database would be better suited for Shutterfly’s data needs, thereby possibly improving programmer productivity as well as performance and scalability.

Shutterfly considered a wide variety of alternative database systems, including Cassandra, CouchDB and BerkeleyDB, before settling on MongoDB. Shutterfly has installed MongoDB for metadata associated with uploaded photos. For those parts of the application which require a richer transactional model, like billing and account management, the traditional RDBMS is still in place.

So far, Shutterfly is happy with its decision to transition to MongoDB, as verified by the statement of Kenny Gorman (Data Architect at Shutterfly): “I am a firm believer in choosing the correct tool for the job, and MongoDB was a nice fit, but not without compromises.”


MetLife is a leading global provider of insurance, annuities and employee benefit programs. They serve about 90 million customers and hold leading market positions in the United States, Japan, Latin America, Asia, Europe and the Middle East. MetLife uses MongoDB for “The Wall”, an innovative customer service application that provides a consolidated view of MetLife customers, including policy details and transactions. The Wall is designed to look and function like Facebook and has improved customer satisfaction and call centre productivity. The Wall brings together data from more than 70 legacy systems and merges it into a single record. It runs across six servers in two data centres and presently stores about 24 terabytes of data. MongoDB-based applications are part of a series of Big Data projects that MetLife is working on to transform the company and bring technology, business and customers together.


eBay is an American multinational internet consumer-to-consumer corporation, headquartered in San Jose. eBay has a number of projects running on MongoDB for search suggestions, metadata storage, cloud management and merchandizing categorization.

The above is just a hint at the companies using MongoDB. Many more use it, most of them as their primary database.

This article is by bigdata.

How MongoDB’s Journaling Works

I was working on a section on the gooey innards of journaling for The Definitive Guide, but then I realized it’s an implementation detail that most people won’t care about. However, I had all of these nice diagrams just lying around.


So, how does journaling work? Your disk has your data files and your journal files.

When you start up mongod, it maps your data files to a shared view. Basically, the operating system says: “Okay, your data file is 2,000 bytes on disk. I’ll map that to memory addresses 1,000,000-1,002,000. So, if you read the memory at address 1,000,042, you’ll be getting the 42nd byte of the file.” (Also, the data won’t necessarily be loaded until you actually access that memory.)

This memory is still backed by the file: if you make changes in memory, the operating system will flush these changes to the underlying file. This is basically how mongod works without journaling: it asks the operating system to flush in-memory changes every 60 seconds.

However, with journaling, mongod makes a second mapping, this one to a private view. Incidentally, this is why enabling journaling doubles the amount of virtual memory mongod uses.

Note that the private view is not connected to the data file, so the operating system cannot flush any changes from the private view to disk.

Now, when you do a write, mongod writes this to the private view.

mongod will then write this change to the journal file, creating a little description of which bytes in which file changed.

The journal appends each change description it gets.

At this point, the write is safe. If mongod crashes, the journal can replay the change, even though it hasn’t made it to the data file yet.

The journal will then replay this change on the shared view.

Then mongod remaps the shared view to the private view. This prevents the private view from getting too “dirty” (having too many changes from the shared view it was mapped from).

Finally, at a glacial speed compared to everything else, the shared view will be flushed to disk. By default, mongod requests that the OS do this every 60 seconds.

And that’s how journaling works. Thanks to Richard, who gave the best explanation of this I’ve heard (Richard is going to be teaching an online course on MongoDB this fall, if you’re interested in more wisdom from the source).

Kristina Chodorow


Software engineer at Google, author of several O’Reilly books on MongoDB.

This article is by Kristina Chodorow.

12 Months with MongoDB

As previously blogged, Wordnik is a heavy user of 10gen’s MongoDB. One year ago today we started the investigation to find an alternative to MySQL to store, find, and retrieve our corpus data. After months of experimentation in the non-relational landscape (and running a scary number of nightly builds), we settled on MongoDB. To mark the one-year anniversary of what ended up being a great move for Wordnik, here is a summary of how the migration has worked out for us.



The primary driver for migrating to MongoDB was performance. We had issues with MySQL for both storage and retrieval, and both were alleviated by MongoDB. Some statistics:

  • Mongo serves an average of 500k requests/hour for us (that does include nights and weekends). We typically see 4x that during peak hours
  • We have > 12 billion documents in Mongo
  • Our storage is ~3TB per node
  • We easily sustain an insert speed of 8k documents/second, often burst to 50k/sec
  • A single java client can sustain 10MB/sec read over the backend (gigabit) network to one mongod. Four readers from the same client pull 40MB/sec over the same pipe
  • Every type of retrieval has become significantly faster than our MySQL implementation:

– example fetch time reduced from 400ms to 60ms
– dictionary entries from 20ms to 1ms
– document metadata from 30ms to 0.1ms
– spelling suggestions from 10ms to 1.2ms

One wonderful benefit to the built-in caching from Mongo is that taking our memcached layer out actually sped up calls by 1-2ms/call under load. This also frees up many GB of ram. We clearly cannot fit all our corpus data in RAM so the 60ms average for examples includes disk access.



We’ve been able to add a lot of flexibility to our system since we can now efficiently execute queries against attributes deep in the object graph. You’d need to design a really ugly schema to do this in MySQL (although it can be done). Best of all, by essentially building indexes on object attributes, these queries are blazingly fast.

Other benefits:

  • We now store our audio files in MongoDB’s GridFS. Previously we used a clustered file system so files could be read and written from multiple servers. This created a huge amount of complexity from the IT operations point of view, and it meant that system backups (database + audio data) could get out of sync. Now that they’re in Mongo, we can reach them anywhere in the data center with the same mongo driver, and backups are consistent across the system.
  • Capped collections. We keep trend data inside capped collections, which have been wonderful for keeping datasets from unbounded growth.



Of course, storing all your critical data in a relatively new technology has its risks. So far, we’ve done well from a reliability standpoint. Since April, we’ve had to restart Mongo twice. The first restart was to apply a patch on 1.4.2 (we’re currently running 1.4.4) to address some replication issues. The second was due to an outage in our data center. More on that in a bit.



This is one challenge for a new player like MongoDB. The administrative tools are pretty immature when compared with a product like MySQL. There is a blurry hand-off between engineering and IT Operations for this product, which is something worth noting. Luckily for all of us, there are plenty of hooks in Mongo to allow for good tools to be built, and without a doubt there will be a number of great applications to help manage Mongo.

The size of our database has required us to build some tools for helping to maintain Mongo, which I’ll be talking about at MongoSV in December. The bottom line is yes–you can run and maintain MongoDB, but it is important to understand the relationship between your server and your data.

The outage we had in our data center caused a major panic. We lost our DAS device during heavy writes to the server, which caused corruption on both master and slave nodes. The master was busy flushing data to disk while the slave was applying operations via the oplog. When the DAS came back online, we had to run a repair on our master node, which took over 24 hours. The slave was compromised yet operable; we were able to promote it to master while repairing the other system.

Restoring from tape was an option, but keep in mind that even a fast tape drive will take a chunk of time to recover 3TB of data, let alone lose the data between the last backup and the outage. Luckily we didn’t have to go down this path. We also had an in-house incremental backup and point-in-time recovery tool, which we’ll be making open source before MongoSV.

Of course, there have been a few surprises in this process, and some good learnings to share.

Data size


At the MongoSF conference in April, I whined about the 4x disk space requirements of MongoDB. Later, the 10gen folks pointed out how collection-level padding works in Mongo; in our scenario (hundreds of collections with an average of 1GB padding per collection) we were wasting a ton of disk on this alone. We were also able to embed a number of objects in subdocuments and drop indexes. This got our storage costs under control: now only about 1.5-2x that of our former MySQL deployment.



There are operations that will lock MongoDB at the database level. When you’re serving hundreds of requests a second, this can cause requests to pile up and create lots of problems. We’ve done the following optimizations to avoid locking:

  • If updating a record, we always query the record before issuing the update. That gets the object into RAM, and the update will operate as fast as possible. The same logic has been added for master/slave deployments, where the slave can be run with --pretouch, which causes a query on the object before issuing the update
  • Multiple mongod processes. We have split up our database to run in multiple processes based on access patterns.

In summary, life with MongoDB has been good for Wordnik. Our code is faster, more flexible and dramatically smaller. We can code up tools to help out the administrative side until other options surface.

Hope this has been informative and entertaining–you can always see MongoDB in action via our public api.


Tony Tam


Strongly opinionated generalist, Swagger committer and VP at Wordnik.

This article is by tonytam.

Interacting with MongoDB using Rails 3 and MongoMapper

MongoDB is an open-source document-oriented database in the vein of CouchDB. It had been a while since I wanted to try this kind of database on a Rails project. After reading a nice tutorial today I decided to take some time to create a sample Rails 3 app and put it on GitHub.

I chose to use MongoMapper, a Ruby object mapper for Mongo. MongoMapper uses ActiveModel and lets you interact with a MongoDB database in a very ActiveRecord-like way.

Hope this sample app will help you get started with MongoDB!

François Lamontagne


I live in the city of Trois-Rivières (three rivers) in Quebec. I am a freelancer and I specialize in web development. Ruby Fleebie

This article is by Frank.

Android Login Registration System with Node.js and MongoDB – Server #1

In this series of tutorials we are going to see how to develop a complete Android login registration system with the Node.js platform and the MongoDB document-oriented database.

What we will do?

-> Set up Node.js in Windows and Ubuntu.
-> Registration and login in Android, storing data in MongoDB.
-> Reset password with email.
-> User profile page with Gravatar.
-> SHA-512 hashing for passwords.

Why Node.js?

-> Hottest new technology.
-> Node.js is JavaScript running on the server.
-> Runs on Chrome's V8 JavaScript engine.
-> It's non-blocking and event-driven.
-> Faster than PHP.
-> Does not need an additional server stack such as Apache or NGINX; it has an inbuilt http module.
-> JavaScript is easier to learn.
-> Inbuilt package manager (npm) to install modules.
-> JSON is easier to use.


Why MongoDB?

-> NoSQL database.
-> Data is stored in objects rather than in tables as in MySQL.
-> Easier to work with from Node.js.
-> Stores data in JSON format.

Our tutorial has two major parts

1. Setting up the server with Node.js and MongoDB.
2. Developing the client-side Android application to interact with MongoDB.

Setting up Server Side


See the following tutorial to set up Node.js and MongoDB on your Linux or Windows machine.

How to Setup Node.js and MongoDB in Ubuntu and Windows

Use the GUI client Robomongo to access MongoDB. It is available for all the platforms.


Creating the project

-> Create a MongoDB database named node-android.
-> Create a directory named node-android and change to the directory.
-> Create a package.json file and include the following dependencies.


     "name": "node-android",     
     "version": "0.0.1",     
     "main": "app.js",   
     "dependencies": {       
          "express" : "~4.0.0",       
          "mongoose" : "~3.8.8",      
          "csprng" : "~0.1.1",        
          "connect": "~2.14.4",       

-> Now enter npm install in a Linux terminal or Windows command prompt.
-> npm downloads all the packages into the node_modules directory.


Let's see why we use these packages.


express is a Node.js framework. It is used to simplify some of the basic tasks.


mongoose is a Node.js module which is used to interact easily with MongoDB databases.


csprng is used to generate random strings, which serve as the salt used to produce more secure hashed passwords.


connect provides Express-compatible middleware used to parse form data, and a logger which prints each request in the terminal.



nodemailer is used to send mail using SMTP for resetting the user password.


crypto is a Node.js module used to generate SHA512 hashes.


gravatar is a simple Node.js module used to get the Gravatar image URL of the user.



Creating app.js

Create a file named app.js in the root of the node-android folder. It is the main file which brings our server up and running. Here our application runs on port 8080 on localhost.


/*
 * Module dependencies.
 */
var express  = require('express');
var connect = require('connect');
var app      = express();
var port     = process.env.PORT || 8080;

// Configuration
app.use(connect.logger('dev'));
app.use(connect.urlencoded());
app.use(express.static(__dirname + '/public'));

// Routes
require('./routes/routes.js')(app);

app.listen(port);
console.log('The App runs on port ' + port);

Setting up routes with routes.js

Our routes.js file handles all the HTTP requests, such as GET and POST. We need to create our own routes to handle login, registration, change-password and forgot-password events. GET requests are defined by the app.get method and POST requests by The functions to handle login, registration and change password are in login.js, register.js and chgpass.js, which will be placed in a config directory inside the node_modules directory. We import each module using require(); for example, the login module is imported using require('config/login'), where config is the directory of the login module.

For each request URL we call the function defined to perform that operation. For example, for the register operation we post the email and password parameters to the URL from our Android application, and Node.js handles it by calling the register function and returning the result to the user in JSON format.'/register', function(req, res) {
     var email =; // Getting the parameters
     var password = req.body.password;

     register.register(email, password, function (found) { // Register function to perform the register event
          console.log(found); // Prints the result in the console (optional)
          res.json(found); // Returns the result back to the user in JSON format
     });
});

Similar to register, we need to define routes for the other operations. Requests to routes which are not defined are answered with a 404 error. The complete routes.js, which has routes for all the operations, is placed in the routes directory.


var chgpass = require('config/chgpass');
var register = require('config/register');
var login = require('config/login');

module.exports = function(app) {

     app.get('/', function(req, res) {

     });'/login', function(req, res) {
          var email =;
          var password = req.body.password;

          login.login(email, password, function (found) {
               res.json(found);
          });
     });'/register', function(req, res) {
          var email =;
          var password = req.body.password;

          register.register(email, password, function (found) {
               res.json(found);
          });
     });'/api/chgpass', function(req, res) {
          var id =;
          var opass = req.body.oldpass;
          var npass = req.body.newpass;

          chgpass.cpass(id, opass, npass, function (found) {
               res.json(found);
          });
     });'/api/resetpass', function(req, res) {
          var email =;

          chgpass.respass_init(email, function (found) {
               res.json(found);
          });
     });'/api/resetpass/chg', function(req, res) {
          var email =;
          var code = req.body.code;
          var npass = req.body.newpass;

          chgpass.respass_chg(email, code, npass, function (found) {
               res.json(found);
          });
     });
};

Defining MongoDB database Schema

Our schema defines the database structure. All our data is of String type, defined in JSON format: email, for example, is defined as "email: String". The connection with MongoDB is established using mongoose.connect(). The connection URL is "mongodb://localhost:27017/node-android", where 27017 is MongoDB's port number and node-android is our database name. You can also add a username and password to your database to make it more secure. The user schema is defined in models.js so that we can use it easily in all our modules.


var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/node-android');

var Schema = mongoose.Schema;

var userSchema = mongoose.Schema({
     token : String,
     email: String,
     hashed_password: String,
     salt : String,
     temp_str: String
});

module.exports = mongoose.model('users', userSchema);

Defining register module with register.js

The register module has a global register function which takes email and password parameters and a callback. Initially we check whether the email is valid; if not, we alert the user. Next we check the password strength: the password length should be more than 4, and it should contain capitals, numbers and a special character. You can change this according to your needs. We encrypt the password by prepending a salt (a random string) and hashing the result with SHA-512 before storing it in the database; a token is also stored, which is later used to change the password. Salted passwords like these are very hard to crack. If the user is successfully registered, a response message is sent in JSON format.


var crypto = require('crypto');
var rand = require('csprng');
var mongoose = require('mongoose');
var user = require('config/models');

exports.register = function(email, password, callback) {

     var x = email;
     // Basic email validity check
     if (/^\S+@\S+\.\S+$/.test(x)) {

          // Password strength: longer than 4 characters, mixed case, a number and a special character
          if (password.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) && password.length > 4 && password.match(/[0-9]/) && password.match(/.[!,@,#,$,%,^,&,*,?,_,~]/)) {

               var temp = rand(160, 36); // salt
               var newpass = temp + password;
               var token = crypto.createHash('sha512').update(email + rand(160, 36)).digest("hex");
               var hashed_password = crypto.createHash('sha512').update(newpass).digest("hex");

               var newuser = new user({
                    token: token,
                    email: email,
                    hashed_password: hashed_password,
                    salt: temp
               });

               user.find({email: email}, function(err, users) {

                    var len = users.length;

                    if (len == 0) {
               (err) {
                              callback({'response': "Successfully Registered"});
                         });
                    } else {
                         callback({'response': "Email already Registered"});
                    }
               });
          } else {
               callback({'response': "Password Weak"});
          }
     } else {
          callback({'response': "Email Not Valid"});
     }
};

Defining login module with login.js

In this module the login function is defined, taking email and password parameters with a callback. The password supplied by the user is appended to the salt from the database and hashed. The generated hash is compared to the hash from the database; if they are equal, the token is returned to the user, along with the Gravatar image URL of the user's email address, in JSON format. If not, an alert message is returned.


var crypto = require('crypto');
var rand = require('csprng');
var mongoose = require('mongoose');
var gravatar = require('gravatar');
var user = require('config/models');

exports.login = function(email, password, callback) {

     user.find({email: email}, function(err, users) {

          if (users.length != 0) {

               var temp = users[0].salt;
               var hash_db = users[0].hashed_password;
               var id = users[0].token;
               var newpass = temp + password;
               var hashed_password = crypto.createHash('sha512').update(newpass).digest("hex");
               var grav_url = gravatar.url(email, {s: '200', r: 'pg', d: '404'});

               if (hash_db == hashed_password) {
                    callback({'response': "Login Success", 'res': true, 'token': id, 'grav': grav_url});
               } else {
                    callback({'response': "Invalid Password", 'res': false});
               }
          } else {
               callback({'response': "User does not exist", 'res': false});
          }
     });
};

Defining chgpass module with chgpass.js

This module has the functions to change the user's password from the profile page and to reset the password if the user has forgotten it. The cpass function changes the password from the profile page, taking the old password, the new password and the token as parameters. If the old password matches the password stored in the database, the new password is set; otherwise an error message is returned.

Next is the respass_init function, which sends a random string to the user's email address for resetting the password. We use the nodemailer module to send mail using SMTP.

Next is the respass_chg function, which takes the code that was emailed, the new password and the email address as parameters. If the code matches the one that was sent, the new password is set; otherwise an error is displayed.


var crypto = require('crypto'); 
var rand = require('csprng'); 
var mongoose = require('mongoose'); 
var nodemailer = require('nodemailer'); 
var user = require('config/models');  

var smtpTransport = nodemailer.createTransport("SMTP", {
  auth: {
    user: "[email protected]",
    pass: "********"
  }
});

exports.cpass = function (id, opass, npass, callback) {

  // Generate a fresh salt and hash for the new password
  var temp1 = rand(160, 36);
  var newpass1 = temp1 + npass;
  var hashed_passwordn = crypto.createHash('sha512').update(newpass1).digest("hex");

  user.find({ token: id }, function (err, users) {

    if (users.length != 0) {
      var temp = users[0].salt;
      var hash_db = users[0].hashed_password;

      // Recompute the hash of the old password with the stored salt
      var newpass = temp + opass;
      var hashed_password = crypto.createHash('sha512').update(newpass).digest("hex");

      if (hash_db == hashed_password) {
        // Enforce a minimal strength policy on the new password
        if (npass.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) && npass.length > 4 && npass.match(/[0-9]/) && npass.match(/.[!,@,#,$,%,^,&,*,?,_,~]/)) {

          user.findOne({ token: id }, function (err, doc) {
            doc.hashed_password = hashed_passwordn;
            doc.salt = temp1;
            doc.save();
          });

          callback({ 'response': "Password Sucessfully Changed", 'res': true });
        } else {
          callback({ 'response': "New Password is Weak. Try a Strong Password !", 'res': false });
        }
      } else {
        callback({ 'response': "Passwords do not match. Try Again !", 'res': false });
      }
    } else {
      callback({ 'response': "Error while changing password", 'res': false });
    }

  });
};
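The strength check used in cpass (and again in respass_chg) is just a chain of regular-expression tests. Pulled out on its own it behaves like this (the example passwords are made up):

```javascript
// Mirrors the strength test in cpass: mixed case, length > 4,
// at least one digit, and a special character that is not the
// first character (the leading '.' consumes one character).
// Note: the commas inside the character class are matched as
// literal characters too, an artifact of the original pattern.
function isStrong(npass) {
  return !!(npass.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) &&
            npass.length > 4 &&
            npass.match(/[0-9]/) &&
            npass.match(/.[!,@,#,$,%,^,&,*,?,_,~]/));
}

console.log(isStrong('Abc1!x'));    // true  - mixed case, digit, '!'
console.log(isStrong('password1')); // false - no uppercase, no special char
```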



exports.respass_init = function (email, callback) {

  // Random code that will be emailed to the user
  var temp = rand(24, 24);

  user.find({ email: email }, function (err, users) {

    if (users.length != 0) {

      // Store the code so respass_chg can verify it later
      user.findOne({ email: email }, function (err, doc) {
        doc.temp_str = temp;
        doc.save();
      });

      var mailOptions = {
        from: "Raj Amal <[email protected]>",
        to: email,
        subject: "Reset Password",
        text: "Hello " + email + ". Code to reset your Password is " + temp + ".\n\nRegards,\nRaj Amal,\nLearn2Crack Team."
      };

      smtpTransport.sendMail(mailOptions, function (error, response) {
        if (error) {
          callback({ 'response': "Error While Resetting password. Try Again !", 'res': false });
        } else {
          callback({ 'response': "Check your Email and enter the verification code to reset your Password.", 'res': true });
        }
      });

    } else {
      callback({ 'response': "Email Does not Exists.", 'res': false });
    }

  });
};


exports.respass_chg = function (email, code, npass, callback) {

  user.find({ email: email }, function (err, users) {

    if (users.length != 0) {
      var temp = users[0].temp_str;

      // New salt and hash for the replacement password
      var temp1 = rand(160, 36);
      var newpass1 = temp1 + npass;
      var hashed_password = crypto.createHash('sha512').update(newpass1).digest("hex");

      // The code entered by the user must match the one that was emailed
      if (temp == code) {
        if (npass.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) && npass.length > 4 && npass.match(/[0-9]/) && npass.match(/.[!,@,#,$,%,^,&,*,?,_,~]/)) {

          user.findOne({ email: email }, function (err, doc) {
            doc.hashed_password = hashed_password;
            doc.salt = temp1;
            doc.temp_str = "";
            doc.save();
          });

          callback({ 'response': "Password Sucessfully Changed", 'res': true });
        } else {
          callback({ 'response': "New Password is Weak. Try a Strong Password !", 'res': false });
        }
      } else {
        callback({ 'response': "Code does not match. Try Again !", 'res': false });
      }
    }

  });
};




Now we have set up the server part. Switch to the root of our project and enter the command node app. Our app will be running on port 8080 of localhost. Open the browser, go to http://localhost:8080 and you will see the message “Node-Android-Project”.

Next proceed to the Client part- Developing Android Application.

Raj Amal


Love to work with Computers, Smartphones, Programming Android. Working @Learn2Crack.

This article is by Raj Amal W from

Implementing Jetty Session Persistence in MongoDB

At SecondMarket, we’re moving towards a development model where not only are the deployments continuous but where deploys incur no downtime. Users should not notice if we take a portion of our servers out for maintenance, even if they’re logged into the site and have an active session. We decided to tackle this problem by persisting Java sessions to external storage. This allows another Jetty to take over serving of existing sessions if we decide to take down a Jetty for maintenance.

Evaluating Options: JDBC, Mongo or Memcache?

There are a number of options for persisting sessions in Jetty to an external engine. The oldest and most well-known technology is to use an SQL database via the JDBC session manager. We already use PostgreSQL as our main relational database, but it’s a critical part of our infrastructure and we weren’t sure we wanted to put session data on the same system; session data definitely doesn’t have the same criticality. We were also concerned about the performance implications of a relational database in this use-case.

Instead, we looked at two NoSQL options via the Jetty NoSQL session plugin available in Jetty 8: MongoDB and memcached. We ultimately settled on MongoDB, not only because it’s the reference implementation for NoSQL sessions but because we have operational experience with MongoDB. (We store some non-critical information in it, like news feeds about companies on the platform.)

Configuring the SessionManager and SessionIDManager

There are two managers to configure in Jetty: the session manager and the ID manager:

  • The session ID manager ensures that session IDs are unique across all webapps hosted on a Jetty instance, and thus there should only be one session ID manager per Jetty instance.
  • The session manager handles the session lifecycle (create/update/invalidate/expire) on behalf of a web application, so there is one session manager per web application instance.

I decided to configure the session ID manager for each Jetty instance using a separate XML file external to Jetty, calling it jetty-mongo-sessions.xml. This way I could either include or not include it in Jetty’s start.ini as circumstances required. Here’s what I used:

(I’m not a Jetty wizard so I realize I probably could have done this with more <Set> clauses rather than <Call> clauses. Feel free to edit my Gist if you can improve the syntax.)

Naturally, this file is written by Chef. In my Jetty sessions recipe, I do a search on all nodes that are Mongo replicaset members, and dynamically build the ArrayList. I then configure the session manager for each webapp in its context file with something like this:

Odds and Ends

A couple other odds and ends and we’re ready to go:

  • The NoSQL extensions don’t come with the core Jetty distribution, so you have to download them from either Codehaus or Eclipse’s website (depending on which variant you use)
  • The MongoDB Java driver has to appear somewhere in Jetty’s class path. I built an RPM for it that drops it in /usr/share/java, and then I just symlink it into Jetty’s lib directory.
  • start.ini has to include the nosql extensions in OPTIONS= in addition to specifying the jetty-mongo-sessions.xml as another config file to read.

Firing It Up

If you start up Jetty now, it should connect to MongoDB and automatically create both the database and collection to hold the session data.

Pitfalls and Warnings

All this worked fine when we were running on a development environment, but apps started to break once deployed to a clustered Mongo environment. A couple things we discovered:

  • One of our apps is written in Lift and we are using some features of Lift that are incompatible with clustered session storage. For speed, the Lift developers have made these features work only with memory-based sessions, so we have had to turn off Mongo sessions for this app; it can’t be clustered.
  • Mongo is an eventually-consistent database system, so if you write data to a replica set's master and then read from a slave, you may or may not get the data you just wrote. That's because in order for the data to make it to the slave, it has to be written to the master's journal and then replicated across the wire to the slave. So I'd strongly recommend not turning slaveOk on.
  • More seriously, we discovered that developers were using in-memory sessions to store long-lived objects, rather than using a distributed object cache like EHcache. Session storage is supposed to be a short-lived stack onto which one pushes things needed for the next page view, where they're popped off. When using in-memory sessions, direct manipulation of the HttpSession object (via setAttribute()) leads to correct behavior: session data is magically updated. But with NoSQL or JDBC sessions, the session isn't persisted to the backing store immediately, only when dirty and after the active request count goes to zero. In a distributed cluster without session affinity, this can cause inconsistency: node 1 writes data to the session, the user is sent on their way, only to hit node 2, which tries to read the session from the backing store, and a race condition occurs because the session hasn't been written there yet by node 1. The long-term solution, of course, will be for us to implement the aforementioned distributed object cache. At the last minute we were forced to set setSavePeriod(-2) in our Jetty config to force session data to be persisted every time setAttribute() is called. (Thanks to my colleague Cal Watson for finding this. He was nominated for, and won, the monthly SecondMarket Peer Bonus award for his -2.)


Using MongoDB as a backing store for session data is absolutely feasible; the NoSQL extensions provided by Jetty are high-quality. In retrospect, with the misuse of session data above, there was no other solution that would have worked for us, so we accidentally happened across the one that was ideal. Had we implemented JDBC (or even Memcache) sessions, we would have been in serious trouble; the sessions collection is about 5.6 GB. This sort of raw, random data turns out to be an excellent fit for Mongo.

Implementing disk-based sessions also moves us one step closer to zero-downtime continuous deploys, and we’re looking forward to cleaning up the rest of our architecture to make that a reality.

Julian C. Dunn

Julian C. Dunn

Product manager at @chef. Interested in tech, startups, urban issues, transportation, journalism. Was a reporter for a hot NY minute. Married to @meredi.

This article is by Julian Dunn from

Configure remote connection with MongoDb (debian)

The latest MongoDb package on Debian is bound to 127.0.0.1, and this address doesn't allow connections from remote hosts. To change it you must set the bind address to 0.0.0.0, e.g.:

# nano /etc/mongodb.conf

bind_ip = 0.0.0.0
port = 27017

# /etc/init.d/mongodb restart

Done! Remember to secure the connection with a password in production mode.




This article is by Andrzej Grzegorz Borkowski from

MongoDB: Mongodump terminate called after throwing an instance of ‘std::runtime_error’

If you encounter this error:

connected to:
Mon Oct 21 10:49:30.638 DATABASE: soft_production to dump/soft_production
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Aborted

Please add this

export LC_ALL=C
# or
export LC_ALL="en_US.UTF-8"

either in the console (for current session) or in .bashrc file.


After that you should be ready to go with:

mongodump --db soft_production

Maciej Mensfeld

This article is by Maciej Mensfeld from