Thursday, July 21, 2011

Computations in Google Map

Comment:

Why can't Google store the complete map easily? For instance, for 1000 nodes, we need to store only around 500 × 999 units of data (one shortest distance per pair of nodes).
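The pair count in the comment is just n(n−1)/2, the number of unordered node pairs. A minimal sketch of how that table grows with the node count (the function name here is mine, purely for illustration):

```python
def pair_count(n):
    """Number of unordered node pairs, i.e. the number of
    distances a full precomputed lookup table would hold."""
    return n * (n - 1) // 2

print(pair_count(1_000))      # 499500 (= 500 x 999)
print(pair_count(1_000_000))  # 499999500000 -- roughly half a trillion entries
```

So the quadratic growth, not any single map file, is what makes a complete all-pairs table impractical at scale.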

Follow-up:

While the analysis for 1000 nodes is correct, when you scale it up to 1 million nodes, the storage requirement becomes very big. Indeed, the file size of the street map for the U.S.A. alone is around 9 GB (for details, please visit: www.esri.com). Thus, for the entire world with all the details (e.g., restaurants, shops, timing data, etc.), the data size would be even larger. Of course, Google can still “easily” handle all this data. But running an algorithm (e.g., Dijkstra’s) over such a large dataset for every query is still quite daunting. That is why we really have to use the “pre-compute-then-table-lookup” approach as far as we can.
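To make the cost concrete, here is a minimal sketch of Dijkstra's algorithm on a toy adjacency-list road network (the graph and node names are invented for illustration). Each query must explore a large fraction of the reachable graph, which is exactly the per-query work the lookup-table approach avoids:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths.
    graph: {node: [(neighbor, edge_weight), ...]}"""
    dist = {source: 0}
    heap = [(0, source)]            # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                # stale heap entry; a shorter path was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy network: 4 intersections with weighted road segments.
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

This runs in O((V + E) log V) time with a binary heap, which is fine for a toy graph but expensive to repeat per user query on a continent-scale network; precomputing and then looking up answers trades storage for that query-time work.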

Comment:

What kind of programming language is used for implementing MapQuest and Google Map?

Follow-up:

For the high-performance components, I strongly believe that their engineers still rely on the C language. However, for other system components such as the user interface, the Web front end, etc., they use a variety of other tools (e.g., Java, Perl, Python, etc.).

1 comment:

  1. You are absolutely right that handling such huge data is not a child's game. And good algorithms need to be maintained to serve the contents at good speed. Nice post.