Comment:
Why can't Google store the complete map easily? For instance, for 1,000 nodes, we only need to store around 500 × 999 units of data (one distance for each pair of nodes).
Follow-up:
While the analysis for 1,000 nodes is correct, the storage grows quadratically, so when you scale it up to 1 million nodes the data becomes very big. Indeed, the file size of the street map for the U.S.A. alone is around 9GB (for details, please visit: www.esri.com). Thus, for the entire world with all the details (e.g., restaurants, shops, timing data, etc.), the data size would be even larger. Of course, Google can still "easily" handle all of this. But running an algorithm (e.g., Dijkstra) over such a large dataset is still quite daunting. That is why we really have to use the "pre-compute-then-table-lookup" approach as far as we can.
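The "pre-compute-then-table-lookup" idea can be sketched in a few lines: run Dijkstra once from every node, store all pairwise distances in a table, and then answer each route query with a simple lookup instead of a fresh computation. This is only a toy illustration, not Google's actual system; the graph, node names, and edge weights below are made up for the example.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> [(neighbour, weight), ...]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network (undirected; weights are travel times in minutes).
edges = [("A", "B", 4), ("A", "C", 2), ("B", "C", 1), ("B", "D", 5), ("C", "D", 8)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

# Pre-compute: one Dijkstra run per node, all results stored in one big table.
table = {(s, t): d for s in graph for t, d in dijkstra(graph, s).items()}

# Table lookup: answering a query is now a dictionary access, not a computation.
print(table[("A", "D")])  # prints 8

# The catch: a full table holds n*(n-1)/2 distinct pairs, which grows quadratically.
for n in (1_000, 1_000_000):
    print(n, "nodes ->", n * (n - 1) // 2, "pairs")
```

For 1,000 nodes the table has about half a million entries, which is easy; for 1 million nodes it has about half a trillion, which is why real systems pre-compute only selectively (e.g., for major road hierarchies) rather than for every pair of points.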
Comment:
What kind of programming languages are used to implement MapQuest and Google Map?
Follow-up:
For the high-performance components, I strongly believe that their engineers still rely on the C language. However, for other system components such as the user interface, the Web front-end, etc., they use a variety of other tools (e.g., Java, Perl, Python, etc.).
In order to make informed decisions in this information age, everyone needs an efficient way to sift through and evaluate the myriad pieces of information available through the Internet. The ultimate objective of this course (HKU CCST9003) is to help students develop a "computational" state of mind for everyday events. We will also discuss intensively the societal impacts of computing technologies on our daily life.
Thursday, July 21, 2011
Computations in Google Map
Labels:
Dijkstra,
dynamic programming,
Google,
Google Map,
MapQuest,
random thought
You are absolutely right that handling such huge data is not child's play. Good algorithms are also needed to serve the content at good speed. Nice post.