PHP and MongoDB Web Development Beginner’s Guide – Thoughts of a first-time author

PHP and MongoDB web development

Social networking doesn’t always make you procrastinate; sometimes it pays off! When @packtauthors tweeted that they were looking for someone to author a book on PHP and MongoDB, I made contact. A few weeks later I signed a contract to write the book. And six months after that, I am pleased to announce that PHP and MongoDB Web Development Beginner’s Guide is published and out for sale!

In this post I intend to share a few words about the motivation behind the book and the journey of a first-time author.

The Motivation

I’m a supporter of the idea that MongoDB can potentially be the new M in LAMP. Web application data storage requirements have changed a lot over the past 4-5 years. Instead of producing content of their own, the most popular websites host content created by their users. This content is diverse in nature and humongous in volume. Mapping such diverse data onto a rigid data structure gets harder as the volume grows. This is where the ‘flexible schema’ nature of MongoDB fits really well. MongoDB is also easy to learn; developers with relational database experience should have little trouble adapting to it. There is a lot of similarity between the underlying concepts of an RDBMS and MongoDB (think documents for rows, and collections for tables). Developers don’t need to wrestle with radical ideas such as column-oriented or graph-based data structures, as some other NoSQL databases require them to. Finally, it is open source and freely available, supports multiple platforms (Windows/Linux/OS X), has great documentation and a very cooperative community, and plays nicely with PHP! All this has led me to believe that in the near future MongoDB will be where MySQL is right now: the de facto database for web application development (I would urge you to read Stephen O’Grady’s article, which makes more persuasive arguments). And since PHP is the dominant language for web programming, writing a book on web development with PHP and MongoDB felt just right.
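For instance, here is a minimal sketch of that row/table analogy in PHP, using the legacy Mongo extension that was current at the time; the database, collection and field names are made up for illustration:

//connect to a local MongoDB server (legacy PHP Mongo driver)
$mongo = new Mongo();
//a collection is roughly the counterpart of a table
$collection = $mongo->myblog->articles;

//a document is roughly the counterpart of a row, but no schema is enforced up front
$collection->insert(array(
    'title'  => 'Hello MongoDB',
    'author' => 'jdoe',
    'tags'   => array('php', 'mongodb')  //arrays nest naturally inside a document
));

//query by example, roughly: SELECT ... FROM articles WHERE author = 'jdoe'
$article = $collection->findOne(array('author' => 'jdoe'));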

The intended audience for this book is web developers who are completely new to MongoDB. It focuses on application development with PHP and MongoDB rather than on MongoDB alone. The first few chapters ease the reader into MongoDB by building a simple web application (a blog) and handling HTTP sessions with MongoDB as the data back-end. The later chapters tackle more ‘interesting’ problems, such as storing real-time web analytics, hosting and serving media content from GridFS, and using geospatial indexing to build location-aware web apps. The reader will also brainstorm about scenarios where MongoDB and MySQL can be used together as a hybrid data back-end.

The Inspiration

Scott Adams, the creator of the famous Dilbert comic strip, wrote an inspirational article in The Wall Street Journal. I’m going to quote a few lines here:

“I succeeded as a cartoonist with negligible art talent, some basic writing skills, an ordinary sense of humor and a bit of experience in the business world. The ‘Dilbert’ comic is a combination of all four skills. The world has plenty of better artists, smarter writers, funnier humorists and more experienced business people. The rare part is that each of those modest skills is collected in one person. That’s how value is created.”

These words moved me. I like programming and I like writing, and although there are smarter programmers and better writers out there, by combining these two passions I could potentially produce something. Besides, I had an amazing learning experience with MongoDB. I had built an API analytics solution with MySQL that became difficult to handle as the volume of data grew. I started playing with MongoDB as a potential alternative, and a month later I had moved all the data from MySQL to a more solid and scalable solution based on MongoDB. I wanted to share this learning experience through a series of blog posts but lacked the personal discipline and commitment to do so. Being obligated to deliver a book within tight deadlines solved that problem!

I also must thank Nurul Ferdous, my friend and former colleague, who is a published tech author himself. His guidance and influence have been instrumental.

The Journey

My journey as a first-time author has been an exhausting yet amazing one! I work at a tech startup, which naturally means longer than usual hours and harder than usual problems to solve. I would come home late and tired, research MongoDB topics, plan how to deliver the message to the reader, write code, test and debug it, write the content in a text editor, and fight with Microsoft Word so the content had the formatting required by the publisher. Then on weekends I would revise and rewrite most of what I had done during the week and hustle to make the deadline. Nevertheless, it has all been a rewarding experience.

In the rewrite phase I had a lot of help from the technical reviewers – Sam Millman, Sigert De Vries, Vidyasagar N V and Nurul Ferdous. They corrected my errors, showed me what more could be added to the content and what should be gotten rid of, and helped me communicate complicated topics to readers more clearly. I convey my sincere appreciation to them!

Time to end this lengthy blog post. I hope you find this book enjoyable and use it to build some really cool PHP-MongoDB apps! I will then consider my endeavor to be a success.


MySQL Prepared Statements and PHP: A small experiment

Consider a PHP-MySQL application where the information for 1000 users is retrieved from the database in a for loop:

for($i = 1; $i <= 1000; $i++){
    $query = "SELECT * FROM user WHERE user_id = $i";
    //run the query and fetch data
}

In each iteration, the first thing the MySQL engine does is parse the query for a syntax check. Then it sets up the query and runs it. Since the query remains unchanged in each iteration (except for the value of user_id), parsing the query every time is definitely an overhead. In such cases, using a prepared statement is the better approach. A prepared statement is just like a typical query, except that it has ‘placeholders’ that are supplied with values at run time. The prepared statement in this case will look like this:

"SELECT * FROM user WHERE user_id = ?"

Notice the placeholder (‘?’) for the value of user_id in the query. Now the MySQL engine needs to parse the query only once, then execute it 1000 times, binding the placeholder to the value of user_id supplied by the PHP script on each run. This pre-parsing of the query results in a significant performance boost.
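To make that concrete, here is a minimal sketch of the 1000-user loop rewritten with a MySQLi prepared statement; $dbLink is assumed to be an existing mysqli connection and username an existing column of the user table:

//prepare the statement once; the query text is parsed here
$stmt = $dbLink->prepare("SELECT username FROM user WHERE user_id = ?");

for($i = 1; $i <= 1000; $i++){
    //bind the current value of user_id to the placeholder and execute
    $stmt->bind_param('i', $i);
    $stmt->execute();
    //fetch the result for this user, then free it
    $stmt->bind_result($username);
    $stmt->fetch();
    $stmt->free_result();
    //...do something with $username
}

$stmt->close();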

The MySQL Improved extension in PHP, more commonly known as MySQLi, provides an API to work with prepared statements. The documentation in the online PHP manual is good enough to get you started on using them in your PHP application, so I’ll not go through it here. Instead, I am going to share the results of my personal experiment comparing the performance of traditional and prepared SQL statements.

I conducted the experiment on a demo project that has a large amount of data. I wrote two separate scripts on our development server, both of which performed the same operation: joining two related tables (one of which has over 150,000 records, the other 350,000) and fetching some data. One script used a regular SQL statement; the other employed a prepared statement. Each script was executed three times and the time required to fetch the data was measured on each pass.

The first script: traditional SQL statement

//Get the database link
$dbLink = getDBLink();

$timeStart = microtime(true);

for($i = 0; $i < 162038; $i++){

    $query = "SELECT article_id, article_name, username as author FROM articles a LEFT JOIN user u ON (a.author_id = u.user_id) WHERE article_id = $i";

    if($result = $dbLink->query($query))
        $obj = $result->fetch_object();
    else
        die("Failed to execute query: $dbLink->error");

    $result->close();
}

$timeEnd = microtime(true);
$dbLink->close();

//measure the time difference
$timeDiff = $timeEnd - $timeStart;

echo "Total time: $timeDiff seconds";

Output:

First Pass -> Total time: 25.5793459415 seconds
Second Pass -> Total time: 25.1708009243 seconds
Third Pass -> Total time: 25.2259421349 seconds

Average: 25.32536300023 seconds

The second script: using a prepared statement

//Get the database link
$dbLink = getDBLink();

$query = "SELECT article_id, article_name, username as author FROM article a LEFT JOIN user u ON (a.author_id = u.user_id) WHERE article_id = ?";

$stmt = $dbLink->stmt_init();

if(!$stmt->prepare($query))
    die("Failed to prepare statement: ".$stmt->error);

$timeStart = microtime(true);

for($i = 0; $i < 162038; $i++) {
    //bind the parameter
    $stmt->bind_param('i', $i);
    //execute the statement
    $stmt->execute();
    //bind the result, fetch it, then free it
    $stmt->bind_result($articleId, $articleName, $author);
    $stmt->fetch();
    $stmt->free_result();
}

$timeEnd = microtime(true);

$stmt->close();
$dbLink->close();

//measure the time difference
$timeDiff = $timeEnd - $timeStart;

echo "Total time: $timeDiff seconds";

Output:

First Pass -> Total time: 20.1434290409 seconds
Second Pass -> Total time: 20.182309866 seconds
Third Pass -> Total time: 20.6448199749 seconds

Average: 20.32351962726 seconds

With the prepared statement the task takes about 20% less time ((25.33 - 20.32) / 25.33 ≈ 0.20), a significant performance boost.

Besides performance, prepared statements can also improve application security by guarding against SQL injection. Check out this informative blog post on that topic.
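To illustrate the point with a made-up login lookup ($dbLink is again assumed to be a mysqli connection), compare interpolating user input directly into the SQL string with binding it as a parameter:

//UNSAFE: $username comes straight from user input, so a value like
//"admin' OR '1'='1" changes the meaning of the query itself
$query = "SELECT * FROM user WHERE username = '$username'";

//SAFER: with a prepared statement the input is bound as data and is
//never re-parsed as part of the SQL
$stmt = $dbLink->prepare("SELECT * FROM user WHERE username = ?");
$stmt->bind_param('s', $username);
$stmt->execute();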

Harrison Fisk at MySQL AB wrote a very good article on MySQL prepared statements. Don’t forget to check out the section ‘When should you use prepared statement?’ if you read it.