
Monday, November 14, 2011

Security and cryptography

Comment 1:

Security is a “system” concept.

Follow-up:
Yes, it is very important for us (the users) to understand this so that we do not gain a false sense of security when we are “educated” that our data are encrypted. Now you know that data encryption is just one part of the whole process. If anything goes wrong in another part of the system, security cannot be guaranteed.

Comment 2:

HTTPS protocol?


Follow-up:
This is the so-called “secure” version of the HTTP protocol. Basically, this protocol transports encrypted data instead of sending data in plaintext. The data is usually encrypted using a symmetric key system, for which the shared key has to be agreed upon using a public key approach. Please refer to Problem 3 of Tutorial 5 for the design of such a key set-up protocol.
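To make the idea concrete, here is a toy sketch of such a key set-up: the client picks a random symmetric session key and ships it to the server encrypted under the server's public key. The RSA numbers are tiny and insecure, chosen purely for illustration.

```python
# Toy sketch: agreeing on a shared symmetric key via a public-key system.
# The numbers are tiny and insecure -- for illustration only.
import random

# Server's toy RSA key pair: n = p*q, e public, d private.
p, q = 61, 53
n = p * q                      # 3233
e = 17                         # public exponent
d = 2753                       # private exponent (e*d = 1 mod lcm(p-1, q-1))

def rsa_encrypt(m, exp, mod):
    return pow(m, exp, mod)

def rsa_decrypt(c, exp, mod):
    return pow(c, exp, mod)

# Client picks a random symmetric session key and sends it encrypted
# under the server's public key (e, n).
session_key = random.randrange(2, n)
ciphertext = rsa_encrypt(session_key, e, n)

# Server recovers the same key with its private key; from now on both
# sides use the (fast) symmetric key for bulk data encryption.
recovered = rsa_decrypt(ciphertext, d, n)
assert recovered == session_key
```

This is only the key-agreement idea; a real HTTPS handshake involves certificates, nonces, and key derivation on top of it.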

Comment 3:

Stealing bank account information from the Internet?


Follow-up:
Yes, whether you like it or not, this kind of thing is believed to be happening all the time! The thing is, it is not very difficult to identify the “weakest” link in a system (e.g., a particular e-commerce Web site). It is widely believed that after such a system is broken, the hacker will not just use the bank account information (e.g., for buying things) but will also hold the bank and/or the e-commerce Web site to ransom.

Comment 4:

What is symmetric key cryptography?


Follow-up:
Symmetric key systems have always been the most important means of achieving data confidentiality, even though public key systems are more versatile and “strong”. The reason is that symmetric key algorithms are usually much, much faster than public key algorithms. In a typical symmetric key system, a shared key has to be agreed upon through some means (see Comment 2 above). Then, the communicating parties use the shared key to perform encryption/decryption.
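A minimal sketch of the symmetry: the same shared key both encrypts and decrypts. The repeating-key XOR below is not a real cipher and is used only to illustrate the shared-key idea.

```python
# Minimal symmetric-encryption sketch: the SAME shared key encrypts and
# decrypts. Repeating-key XOR is NOT secure -- illustration only.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the key, repeating the key as needed.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

shared_key = b"secret"                 # agreed beforehand (see Comment 2)
message = b"transfer $100 to Alice"
ciphertext = xor_crypt(message, shared_key)

# Applying the same operation with the same key recovers the plaintext.
assert xor_crypt(ciphertext, shared_key) == message
```

Real symmetric ciphers (e.g., AES) are far more elaborate, but the usage pattern is the same: one shared key, used at both ends.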

Comment 5:

Are there any more sophisticated cryptography techniques?

Follow-up:
One of the most notable sophisticated cryptography techniques is elliptic curve cryptography, which relies on yet another branch of mathematics (also related to number theory) to perform encryption and decryption.

Comment 6:

Public key cryptography. RSA algorithm?

Follow-up:
We have already worked extensively on this in Tutorial 5.


Comment 7:

Difference between public key and symmetric key cryptography.


Follow-up:
The most important difference is, NOT the strength, but the way keys are distributed/shared.

Saturday, November 12, 2011

Greedy algorithm, Google Map vs Map Quest, WiFi vs 3G

Comment 1:

Are there other algorithms to find the shortest path that use recursion or dynamic programming?

Follow-up:

Yes there is one called Floyd-Warshall algorithm that uses dynamic programming to find all-pairs
shortest paths. This algorithm can also handle graphs with negative link weights.
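For reference, a minimal sketch of the Floyd-Warshall dynamic program follows; the 4-node graph and its weights (including one negative link) are made up for illustration.

```python
# Floyd-Warshall all-pairs shortest paths via dynamic programming.
# dist[i][j] is repeatedly improved by asking: does going through
# intermediate node k help?
INF = float('inf')

def floyd_warshall(w):
    n = len(w)
    dist = [row[:] for row in w]       # copy the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# 4-node example with a negative link (but no negative cycle).
w = [[0,   3,  INF, 7],
     [8,   0,  2,   INF],
     [5,  INF, 0,   1],
     [2,  INF, -1,  0]]
d = floyd_warshall(w)
assert d[0][2] == 5        # 0 -> 1 -> 2 beats 0 -> 3 -> 2
```

The triple loop makes the running time O(n^3), independent of the number of links.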

Comment 2:

What happens in the shortest path greedy algorithm (i.e., Dijkstra) when two distances surrounding the node are equal?

Follow-up:

Good observation. We will usually just “randomly” choose one to break the tie.

Comment 3:

Any technical and systematic ways to calculate the time-complexity of an algorithm?

Follow-up:

Yes, sure. For more complicated situations, we usually end up with a bunch of summations of series when counting the number of key steps. Then we need to use some mathematical tools to obtain closed-form expressions. These are computer science topics, though.

Comment 5:

If 3G and Wi-Fi on my smartphone are similar things, why do I notice a significant difference between the speed of loading the same page on 3G and Wi-Fi?

Follow-up:

3G and Wi-Fi are similar in that they are both wireless communication technologies. But the similarity ends there. The wireless communication mechanisms used differ in many aspects. For example, 3G is based on cellular communication, which is designed for a longer range and thus a lower speed (as electromagnetic signals deteriorate significantly over distance). Wi-Fi has a much shorter range and can therefore afford to provide a higher speed. There are many other technical differences, which are topics of a wireless communication and networking course.

Comment 6:

The “shortest path algorithm” could be demoed on 9 slides with animation instead of using just one slide. It is a bit too small.

Follow-up:

Thanks for the comment! You are right! Will improve this. Sorry for the inconvenience.

Comment 7:

How come MapQuest/Google-Map is so fast on a map that has billions of nodes?

Follow-up:

One trick is that MapQuest/Google Maps generally do not do the computations “on demand”: they pre-compute many routes, which are then stored in a database. When someone posts a query, the majority of the routes are pulled out of the database.

Comment 9:


How to resolve the conflict when there is negative weight in the graph when using Dijkstra’s
algorithm?

Follow-up:

Dijkstra’s algorithm fails when there is negative weight in the graph. We will need to use a different technique, e.g., the Bellman-Ford algorithm. See also Comment 1 above.

Comment 10:

You have mentioned that researchers in the computing field try to give everything a unique ID (a unique IP address), for example, a microwave oven. However, I don’t really understand why we would do so. What are the purposes? If that applies widely, will there be any privacy problems?

Follow-up:

Yes sure I believe there will be significant privacy problems! Maybe you can consider using this as your survey topic?

Comment 11:

How do we obtain O(e + n log n) for Dijkstra’s algorithm?

Follow-up:

A rough sketch is as follows: we need to examine all the links (during the updating of the estimated distances labelled on the nodes), which gives the e term. We also need to repeatedly extract the node with the smallest estimated distance, in increasing order, which gives the n log n term.
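A sketch of the algorithm using Python's heapq as the priority queue; the small adjacency list is made up for illustration. The heap pops account for the n log n term, and the relaxation of each link accounts for the e term.

```python
# Dijkstra with a binary heap: nodes are popped in increasing order of
# estimated distance, and each link is examined once during relaxation.
import heapq

def dijkstra(adj, source):
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)     # smallest estimated distance
        if u in done:
            continue                   # stale heap entry, skip it
        done.add(u)
        for v, w in adj.get(u, []):    # relax every outgoing link
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {'a': [('b', 2), ('c', 5)],
       'b': [('c', 1), ('d', 4)],
       'c': [('d', 1)]}
assert dijkstra(adj, 'a') == {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```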

Comment 12:

Why does the greedy approach usually result in a fast algorithm?

Follow-up:

This is because as we make a greedy choice in each step, we reduce the problem size by 1. Thus, after making n greedy choices (i.e., n steps), we will finish the problem. Consequently, we usually end up having an algorithm that takes O(n) time, together with the time needed in some pre-processing (e.g., sorting, which takes another n log n time).
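As an illustration of this pattern, here is a sketch of the greedy method for the fractional knapsack problem (the item values and weights are made up): one sort as pre-processing (n log n), then one greedy choice per step (O(n)).

```python
# Greedy pattern: sort once, then make one greedy choice per step, each
# choice shrinking the problem by one item.
def fractional_knapsack(items, capacity):
    # items: list of (value, weight); take best value-per-weight first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:        # one greedy choice per step
        take = min(weight, capacity)
        total += value * take / weight # fractional part allowed
        capacity -= take
        if capacity == 0:
            break
    return total

# ratios 6, 5, 4; capacity 50 -> 60 + 100 + (20/30)*120 = 240
assert fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) == 240.0
```

For the fractional version this greedy choice is actually optimal; for the 0-1 knapsack (see Comment 13 below and Tutorial 3) it is not.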

Comment 13:

Knapsack problem. It can be applied to daily life. Is it similar to the linear programming taught in Maths at cert level?

Follow-up:

Yes it is similar. Wait until Tutorial 3 to find out more details.

Comment 14:

Dynamic programming is a bit difficult to understand. Also the DNA sequence example.

Follow-up:

The theme of Tutorial 3 is dynamic programming. Hope you will feel better after going through this tutorial cycle. The DNA sequence example is interesting, and I think you can understand at least 70% of it by studying the Web page mentioned in class.

Comment 15:

I am a bit confused by the graph showing different representations of running time that result in different values.

Follow-up:

I think you are talking about the “estimated distances” labelled on the nodes in the example graph for showing how Dijkstra’s algorithm works. Those values are NOT running time (of an algorithm). Those are used for representing, for example, the time it takes to travel from one place to another geographically (when we use the graph to represent a map).

Comment 16:


Why “99” is used to represent the information that has not been processed?

Follow-up:

Yes, in computer programming we often use such tricks (a “sentinel” value standing for “not yet processed”) to ease processing.

Comment 18:

Google-Maps are more developed than MapQuest these days. For example, it can alert that friends are located close by to you.

Follow-up:

Thanks!

Comment 21:

I am not quite clear about how two determinant factors will affect the efficiency of greedy approach.

Follow-up:

The main factors are the “greedy choices”. If you pick the right greedy choice at each step, the
algorithm would end up giving “optimal” result. In terms of speed, please refer to Comment 12 above.

Thursday, November 10, 2011

Recursion, randomization, sorting and computations

Comment 1:

Divide-and-conquer vs. recursion?

Follow-up:

Divide-and-conquer is a general technique in which we divide the problem into smaller parts and then solve the smaller parts independently. Recursion is closely related to divide-and-conquer in that it is usually the most concise way to express a divide-and-conquer idea. However, a divide-and-conquer idea does not always need to be realized using recursion. Indeed, sometimes we would like to avoid recursion because it can be very slow, as you have seen (or will see) in Tutorial 1.

Comment 2:

Randomization?

Follow-up:

Let us consider a simple problem in order to illustrate the usefulness of randomization. This problem is about several important but related concepts: worst-case analysis, average-case
analysis, and probabilistic analysis. Consider the following “hiring” algorithm:

1) Set candidate best to be unknown;

2) For each of the n candidates, do the following:

3) Interview candidate i ;

4) If candidate i is better than candidate best, then hire candidate i and set best to be i ;

Assume that interviewing each candidate has a cost of c_I and hiring a candidate has a cost of c_H (where c_H > c_I under normal circumstances).

(a)

Can you give the worst case total cost of the above hiring algorithm?

(b)

Assume that the candidates come to the interview in a random order, i.e., each candidate is equally likely to be the best. Specifically, candidate i has a probability of 1/i of being the best among the first i candidates. Can you give the average case total cost of the above hiring algorithm?


Hint: You can consider using the “indicator random variable” X_i, which is equal to 1 if candidate i is hired and 0 otherwise. Hence, the average number of candidates that are actually hired is equal to the “expected value” (i.e., the average) of ∑ X_i.

Answers:

(a)

The worst case is that every interviewed candidate is hired. Thus, the total cost is: c_I n + c_H n.

(b)

In this average case, the only change to the total cost is the hiring part, so let’s focus on just this part. Specifically, as given in the Hint, the average number of candidates that will be hired is the expected value of ∑ X_i, which in turn is equal to ∑ 1/i (summing over i = 1 to n). A good bound for this sum is log n. Thus, the average case hiring cost is just c_H log n, which is much smaller than the worst case’s hiring cost. The lesson we learn is that the average case is sometimes much better than the worst case.

As you can see, it is helpful to assume that all permutations of the input are equally likely so that a probabilistic analysis can be used. Now, here is the power of randomization—instead of assuming a distribution of inputs (i.e., the candidates), we impose a distribution. In particular, before running the algorithm, we randomly permute the candidates in order to enforce the property that every permutation is equally likely. This modification does not change our expectation of hiring a new person roughly log n times. It means, however, that for any input we expect this to be the case, rather than for inputs drawn from a particular distribution.
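A small simulation illustrates the analysis (the function name and parameters here are made up for the sketch): after the random shuffle, the average number of hires settles near the harmonic number H_n = 1 + 1/2 + ... + 1/n, which is about ln n and far below the worst case of n hires.

```python
# Simulating the hiring problem after randomly permuting the candidates.
import random

def hires_after_shuffle(n, rng):
    candidates = list(range(n))        # rank i = candidate quality
    rng.shuffle(candidates)            # impose a uniform distribution
    best, hires = -1, 0
    for c in candidates:
        if c > best:                   # better than current best: hire
            best, hires = c, hires + 1
    return hires

rng = random.Random(0)                 # fixed seed for reproducibility
n, trials = 100, 2000
avg = sum(hires_after_shuffle(n, rng) for _ in range(trials)) / trials

# H_100 is about 5.19 -- nowhere near the worst case of 100 hires.
harmonic = sum(1 / i for i in range(1, n + 1))
assert abs(avg - harmonic) < 0.5
```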

Comment 3:

Quantum computing? Parallel processing? Biological computing?

Follow-up:

These are really exotic computing models that we will elaborate on in the later part of the course. Please be patient. Thanks!

Comment 4:

There are a lot of different mathematical ways of documenting/calculating numbers/codings. How come only a few can be applied to computing processing algorithms?

Follow-up:

Good point. But please note that not many mathematical methods of calculation can be realized, in a mechanical manner, as a computing procedure (i.e., to be carried out by a computer). For instance, think about integration in calculus: there are many integration problems that need very good “inspection” or insight to solve. Agree?

Comment 5:

Insertion sort?

Follow-up:

Please find the sketch of a computing procedure using insertion sort below.

(1) Given a list of numbers A[1], A[2], ..., A[n]
(2) for i = 2 to n do:
(3)     move A[i] forward to the position j <= i such that
(4)         A[i] < A[k] for j <= k < i, and
(5)         either A[i] >= A[j-1] or j = 1

Now, it is not difficult to see that the number of checkings/swappings in lines (3) to (5) above cannot be larger than i. Thus, the total number of steps, i.e., the estimated running time, would be the sum of i for i = 2 to n, i.e., on the order of n^2.
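The sketch can be turned into runnable Python as follows: element A[i] is shifted backwards until it sits just after a value no larger than itself.

```python
# Insertion sort: grow a sorted prefix, inserting each new element into
# its proper place within that prefix.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # at most i comparisons/shifts
            a[j + 1] = a[j]            # shift larger elements right
            j -= 1
        a[j + 1] = key                 # drop key into its position
    return a

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```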

Comment 6:

Quicksort? Randomization?

Follow-up:

The Quicksort algorithm looks very similar to the algorithm that you have worked (or will work) on in Problem 3 of Tutorial 1 (about “searching”). So I leave it to you to write up the computing procedure. You can also show that the estimated running time is n log n (on average).

On the other hand, I would like to supplement a bit more about the “randomization” part used in
Quicksort. Similar to the “hiring problem” in Comment 2 above, we need a certain “distribution” in the input list so as to realize the potential of Quicksort (or, divide-and-conquer, for that matter).
Specifically, in Quicksort, we would like to choose a pivot so that the resulting two partitions are of more or less equal size. It is reasonable to assume that if the list is somehow “totally random” (we will talk more about generating randomness later on), then it is likely that a randomly selected number from the list has a value right in the middle, i.e., it will divide the list into two equal halves. So just like the hiring problem, we will randomly shuffle the list before sorting and then, statistically, we would expect the list to be divided into equal halves when we partition it.
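A sketch of randomized Quicksort in Python: picking a random pivot has the same statistical effect as shuffling the list first, making balanced partitions likely.

```python
# Randomized Quicksort sketch: a random pivot makes roughly-equal
# partitions likely, giving the expected n log n running time.
import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                       # randomization step
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

data = [7, 3, 9, 1, 4, 1, 8]
assert quicksort(data) == sorted(data)
```

This version builds new lists for clarity; an in-place partitioning version is what is normally implemented in practice.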

Comment 7:

P2P should be discussed/elaborated.

Follow-up:

We will spend some time discussing P2P systems in a later part of the course. Please be patient.
Thanks!

Comment 8:

We talked about our actions being monitored, even in P2P because we are accessing the Trackers. But what about the ISPs? They track everything we do. What about VPN (virtual private network)? Can it prevent ISPs from tracking us?

Follow-up:

Yes it is true that the ISPs are keeping track of our moves all the time. So when the law enforcement people need the information (with warrant), they will supply it. Even VPN (i.e., setting up the so-called private links, in the form of encrypted channels) cannot help because ultimately your IP address has to be revealed. Only the data can be encrypted. We will discuss more about Internet security and privacy in later part of the course.

Comment 9:

Feasibility of parallel processing? For example, in the Tower of Hanoi problem we are limited by the number of pegs and the rules of the game.

Follow-up:

Yes you are right. How to do things in parallel in a computer has been baffling researchers for decades.

We will discuss more about these difficulties later in the course.

Comment 10:

Isn’t it true that “recursion” is something just like mathematical induction?

Follow-up:

Yes you are absolutely right! Very good observation. Indeed, recursion, or even divide-and-conquer, is closely related to the “induction” concept. We try to “extrapolate” solutions of smaller problems to larger ones. That is the idea.

Comment 11:

The CPUs are evolving nowadays. Their computational speeds increase exponentially and this lowers the significance of the effectiveness of one algorithm to solving the problem as the CPUs can carry out the tasks equally fast and well. But still thinking of an effective algorithm is still challenging and worth continuing.

Follow-up:

Oh, this one I cannot agree with. Indeed, as you will find out in this course (and as we will discuss in more detail soon), there are some problems that cannot be solved practically without a smart algorithm, even if you have thousands of processors at your service.

Thursday, September 15, 2011

The nature of computing

Comment 1:

Artificial intelligence (AI)?

Follow-up:

AI is a big topic. We will explore some aspects of AI in later parts of the course. However, one thing that is useful for us now is: How would you define “intelligence”? Is playing chess an intelligent action?

Comment 2:

Weather forecasting?

Follow-up:

Weather forecasting is a grand challenge in the computing field. The reason is that the calculations involve using thousands of variables and complicated differential equations. Furthermore, the time line has also to be finely divided (e.g., in units of milli-seconds) in order to come up with accurate results. Thus, weather forecasting requires great amount of computing power. We will talk about the kind of computing power it needs. Moreover, we will pay a visit to the HK Observatory later to take a look at the machines used there.

Comment 3:

Is playing football by robots meaningful? I personally think that playing football is just a small step, and this small step will contribute a lot to the later development of robot design.

Follow-up:

Good point! Thank you!

Comment 4:

Would like to know the extent of technical knowledge needed/will be taught in the course.

Follow-up:

We will try our best to balance the need for technical knowledge against the usefulness of the materials taught in the course. Your comments and criticisms are most welcome!

Comment 5:

I major in linguistics and I think speaking a language can be regarded as computing.

Follow-up:

Yes sure you are right! Very insightful comment!

Comment 6:

Google and Apple are not the only companies doing searches and smart-phones, respectively. Should not single out these companies.

Follow-up:

Thanks for the comment! I just thought that they are “representatives” of their industries, and the names “Google” and “Apple” have nowadays become household names. Yet you are right that we should not focus too much on them with respect to the technologies that they use.

Friday, July 22, 2011

Digital signature

Comment:

Still confused about “digital signature”. In particular, can a “digital signature” be stolen?

Follow-up:

Digital signature is just an encrypted output of a message (or actually the digest of the message). The encryption is performed by using the private key of the signer. Thus, the signer cannot deny having signed the message because only he/she has the private key. Furthermore, every “signature” is unique in that the encrypted outputs of different messages are different.

Thus, a “digital signature” cannot be stolen.
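The “sign = encrypt the digest with the private key” idea can be sketched with toy RSA numbers (tiny and insecure, purely for illustration):

```python
# Toy digital signature: encrypt the message digest with the signer's
# PRIVATE key; anyone can verify with the PUBLIC key. Numbers insecure.
import hashlib

n, e, d = 3233, 17, 2753          # toy RSA: (e, n) public, d private

def digest(msg: bytes) -> int:
    # hash the message, reduced mod n so it fits the toy key size
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

def sign(msg):                    # only the signer knows d
    return pow(digest(msg), d, n)

def verify(msg, sig):             # anyone can check with (e, n)
    return pow(sig, e, n) == digest(msg)

m = b"I owe Bob $10"
s = sign(m)
assert verify(m, s)               # signature matches this exact message
```

Because the signature is bound to the digest of one particular message, copying it onto a different message makes verification fail.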

Cracking using quantum computing

Comment:

What if quantum computing cracks the theory of public key approach?

Follow-up:

Theoretically, based on Peter Shor’s factorization algorithm, a quantum computer can break a public key cryptosystem in very short time. Thus, public key cryptosystem as we are using it now will become completely useless.

The critical problem of the current public key cryptosystem (the “RSA” algorithm) is that it relies on an assumption of computational intractability (that factoring large numbers is hard). In other words, its security is not mathematically proven.

Security scientists are actively researching on “provable” public key approaches.

Certificate authority

Comment:

What if the certificate authority discloses private information of users? Safe?

Follow-up:

Yes, it is definitely possible that a CA could inadvertently disclose some private information. For example, a CA’s database could be compromised by hackers, just like credit-card numbers are disclosed in some hackings of e-business Web sites.

So in a sense, one could argue that our Internet security is quite fragile.

Comment:

Who will certify the public key of the certificate authority?

Follow-up:

A CA’s public key is not “certified” but just published in a widely accessible site so that everyone can verify it.

Are malware and computer virus related to Internet security?

Comment:

Does it mean that we should not use not-so-famous browsers to surf the net because those will lead to risky results?

Follow-up:

Yes, definitely. In fact, we should all be very careful when using any software in today’s computing environment because of the Internet. Specifically, malicious software (called malware) can extract useful information (e.g., bank statements) from your computer and then send it off to some remote computers (e.g., the hackers’ computers) for launching further attacks.

We should not trust certificates from unknown sources/companies. Nor should we trust the “judgment” made by the browsers, which might have been altered by some malware.

Comment:

How about computer virus? (Is computer virus related to Internet security?)

Follow-up:

A computer virus, like the “malware” I mentioned above, is highly related to Internet security in the sense that nowadays a computer virus is not about making fun of an innocent user (as in the past). Instead, a virus is used for getting sensitive information and/or using your computer to launch further attacks on others (e.g., Distributed Denial-of-Service attacks against popular Web sites such as Amazon.com).

Most importantly, a computer virus spreads by using emails or some bogus links in a Web page.

So we have to be careful in opening email attachments and clicking on some Web links.

Is password enough for authentication?

Comment:

Is password enough for authentication? Any other ways for the authentication (like USB keys)? What else can I do to enhance my Internet security?

Follow-up:

A password is definitely not enough. But we do not know what really is enough. Remember the weakest link concept? Thus, to enhance security, we really have to do as much as we can, without compromising efficiency or convenience too much. So it is true that in many situations some additional security measures, e.g., a hardware key, are coupled with the password for authentication.

Biometrics (e.g., retina scans, fingerprints, etc.) are also commonly used as further information for authentication.

Quantum computer

Comment:

What makes a quantum computer impractical?

Follow-up:

For now, the greatest difficulty is implementing a quantum circuit that is stable. In fact, the problems related to accurately and reliably measuring the quantum bits, namely de-coherence and error correction, are still very difficult to solve in practice, even if we assume we can deploy a tremendous amount of resources in the construction of a quantum computer.

Still remember chaos theory? It is also anticipated that if we cannot tackle the accurate measurement problem effectively, it will be impossible to scale up the size of a quantum circuit without introducing enormous errors.

Skype system

Comment:

Why the SuperNode in a Skype network can boost the speed of the system?

Follow-up:

Sorry for causing this confusion!

In fact, each SuperNode is mainly responsible for routing calls for other users. In the routing mechanism, sometimes a SuperNode needs to serve as an agent on behalf of another Skype user who is behind a firewall or an NAT router (very commonly used at home or in an organization). Thus, each SuperNode is really helping to enhance the “reachability” of the Skype network, rather than the speed of the system.

Comment:

How can the Skype system enable “computer-to-telephone” calls?

Follow-up:

Good question.

Specifically, there are some Skype servers (not shown in the diagram in the presentation slide) that have connections to some PSTN (Public Switched Telephone Network) gateways. Thus, through such servers we can route our VoIP (voice over IP) calls to reach a landline telephone or a cell phone. As this directly consumes resources in the Skype network (owned and operated by the Skype company), the caller and the callee both need to pay some money to make this happen.

Thursday, July 21, 2011

Sequential processing

Comment 3:

What exactly is “sequential processing”?

Follow-up:

Sequential processing refers to those computations that can only be handled by a single processor; putting in more processors does not help. For example, in a typical parallel or multi-task program, there are some “managerial” tasks to be carried out. One of the most commonly encountered tasks is the division of the whole job into pieces. Specifically, if we consider an image processing task, dividing up the image into pieces is one such managerial task. Obviously, this action is not parallelizable.

Can you think about some daily life examples of sequential processing task?

Parallel processing: data parallel and functional parallel

Comment:

Differences between data parallel and functional parallel approaches?

Follow-up:

Data parallel approach is suitable for situations where exactly the same actions can be taken for different pieces of data. A very good example is filtering or quantization in image processing because these tasks require operating on each pixel independently for the whole image. Another daily life example that we discussed in class is the painting of a large wall (in the same color!) by a group of workers in parallel.

Functional parallel approach is suitable for situations where different actions can be taken on the data (possibly the same data) independently and simultaneously. A very good example is the recognition of different objects in the same image. Here, we apply different “actions”, which are the matching of different known object models, over the same image. These actions are usually independent and are different because matching different models requires different kinds of processing. Another daily life example that we discussed in class is the re-modeling of an apartment: we usually need to do many different things such as installation of electrical wiring, painting of walls, installation of fixtures, etc. These functions (sometimes) are independent and can be carried out in parallel.

Finally, one very important difference between the two approaches is the “degree of parallelism” — which can be defined as the number of processors that you can use in a productive manner (i.e., having some useful work to do). Obviously, in data parallel approach, if the data involved is large (e.g., a very large image), then we can use many processors (e.g., up to thousands) to enhance the speed-up. On the other hand, usually there will not be many different (and independent) functions required in a task (e.g., re-modeling of an apartment). So the number of processors that can be put into productive use is usually limited (e.g., on the order of 10 or lower).
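A minimal data-parallel sketch in Python: the same thresholding action is applied to independent chunks of a made-up “image” by a pool of workers (a thread pool is used here only to keep the example self-contained; the names and data are illustrative).

```python
# Data-parallel sketch: the SAME operation runs on independent chunks.
from concurrent.futures import ThreadPoolExecutor

def threshold(chunk):
    # identical work on every chunk: binarize the "pixels"
    return [1 if p > 128 else 0 for p in chunk]

image = list(range(256))               # stand-in for pixel values
chunks = [image[i:i + 64] for i in range(0, 256, 64)]

with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(threshold, chunks))   # one worker per chunk

result = [p for part in parts for p in part]    # stitch chunks back
assert result == [1 if p > 128 else 0 for p in image]
```

Note that the degree of parallelism here is bounded only by the number of chunks, which is exactly why the data parallel approach scales so well.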

Think about “weather forecasting” computing. Which approach do you think is more suitable?

Comment:

Functional parallel computing can only be used when the different functions are independent of each other. So, for example, in the case of “object tracking”, as the different functions are actually dependent on each other, i.e., the object tracking can only be done after background subtracting, so the functions need to be done one after another. Hence, functional parallel computing could not be used in this case. In a sense, this is “completely sequential”?

Follow-up:

This is an excellent observation.

Using optimal method or approximation method?

Comment:

How do we know when to apply an optimal method and when to use an approximation method instead?

Follow-up:

Generally we want to solve a problem fast. Thus, if there is a fast optimal method (for a more precise notion of “fast”, please refer to Homework 5), then we should just use it. Now, a possibly difficult situation arises when we do not have a fast optimal method. For example, the optimal method requires exponential running time. In such a situation, we might need to forgo optimality but use a fast method that will give us an answer, possibly not optimal, but within a short period of time. For example, the “touring” (i.e., gossiping) problem is usually solved by a sub-optimal but fast method.

Time taken for the gossip to come back

Comment:

Time taken for the gossip to come back?

Follow-up:

A circular gossiping loop is a round trip tour visiting each and every node once. The cost of such a tour is defined as the total costs summing up all the link costs. Each link cost can represent the time needed for the gossip message to travel from one node to another node.
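Computing the cost of one given loop is easy; it is finding the cheapest loop that is hard (this is the travelling salesman problem). A sketch with a made-up 4-node cost matrix:

```python
# Cost of a circular gossiping loop: sum the link costs along a tour
# that visits every node once and returns to the start.
def tour_cost(cost, tour):
    n = len(tour)
    # (i + 1) % n wraps around so the tour returns to its start
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

# toy cost matrix: cost[u][v] = time for the gossip to go u -> v
cost = [[0,  2,  9, 10],
        [2,  0,  6,  4],
        [9,  6,  0,  8],
        [10, 4,  8,  0]]

assert tour_cost(cost, [0, 1, 3, 2]) == 2 + 4 + 8 + 9   # = 23
```

Evaluating one tour takes O(n) time; the difficulty is that the number of possible tours grows factorially with n.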

Comment:

Why is there no fast algorithm for a round trip tour where each link and node is visited once?

Follow-up:

That is life, I guess. There are in fact many such seemingly simple problems that do not have a fast optimal algorithm for them yet.

High tech, low tech and "no" tech

Comment:

I would like to know clearer distinctions among high, low, and no tech. In which category are the game consoles (e.g., NDS, X-Box, and PS2) classified?

Follow-up:

In my opinion, high tech means the technology has a truly ingenious component so that it is very difficult for others to re-invent. Of course, when the technology is being used, others can somehow “reverse” engineer the underlying mechanisms and then possibly copy it. Thus, such “high tech” solutions are usually patented to prevent copy-cats from “stealing” the treasures! Examples of high tech are: micro-chip fabrication, Google search, Skype, NVidia Graphics Processing, etc.

On the other hand, to me, low tech means that the technology is not really that difficult to design and implement. Nevertheless, low tech does not mean unsuccessful product. Usually the reverse is true. I think the reason is that while the underlying technology might not be difficult, the “application” of the technology might be pioneering (i.e., a first mover) so that users are accustomed to using it.

No tech, again in my judgment, is something that does not really involve much technology but just marketing and sales strategies, which are extremely important in a business, if not the most important components.

In summary, I think game consoles are really high tech stuff because they are truly “high performance” hardware built on really difficult designs. We will talk a bit more about this hardware later on.

Computations in Google Map

Comment:

Why can’t Google store the complete map easily? For instance, for 1000 nodes, we need to store just around 500 × 999 units of data.

Follow-up:

While the analysis for 1000 nodes is correct, when you scale it up to 1 million nodes, the data storage becomes very big. Indeed, the file size of the street map for the U.S.A. is around 9GB (for details, please visit: www.esri.com). Thus, for the entire world with all the details (e.g., restaurants, shops, timing data, etc.), the data size would be even larger. Of course, Google still can “easily” handle all these. But to run an algorithm (e.g., Dijkstra) over such a large piece of data is still quite daunting. That is why we really have to use the “pre-compute-then-table-lookup” approach as far as we can.

Comment:

What kind of programming language is used for implementing MapQuest and Google Map?

Follow-up:

For high-performance applications, I strongly believe that their engineers are still relying on the C language. However, for other system components such as user interface, Web, etc., they use a variety of other tools (e.g., Java, Perl, Python, etc.).

Principles of the travel scheduling problem

Comment:

Underlying principles of the travel scheduling problem?

Follow-up:

Frankly speaking we also do not know the complete picture of this problem. Yet from an intuitive angle, the problem enlarges the complexity of the original shortest path problem in at least two dimensions. Firstly, we need to solve multiple instances of the shortest path problem for any single round-trip tour (e.g., one day in the trip). Secondly, we need to combine these shortest path solutions temporally across different tours so as to obtain the minimum overall cost for the whole trip.

Chaotic systems

Comment:

Chaotic systems?

Follow-up:

In simple terms, a chaotic system is one which is highly sensitive to initial conditions, and whose seemingly random behaviors can be actually described by deterministic equations. For more details, please visit:

http://en.wikipedia.org/wiki/Chaos_theory (for a general introduction)
http://en.wikipedia.org/wiki/Logistic_map (for more details about a chaotic system defined by a very simple recurrence equation); Reference [1] in http://www.eee.hku.hk/~ykwok/CCST9003/reading-others.html also has an excellent discussion
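As a quick illustration of sensitivity to initial conditions, here is a sketch using the logistic map x_{n+1} = r x_n (1 - x_n) with r = 4 (a chaotic regime): two orbits started a mere 1e-9 apart soon become completely different.

```python
# Track how far apart two logistic-map orbits drift when their starting
# points differ by only 1e-9.
def max_divergence(x0, y0, r=4.0, steps=60):
    x, y, gap = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))     # largest separation seen so far
    return gap

# the 1e-9 initial gap grows roughly exponentially and soon saturates
assert max_divergence(0.2, 0.2 + 1e-9) > 0.1
```

Yet the map itself is a perfectly deterministic one-line equation, which is exactly the point about chaotic systems.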

Random number generation

Comment:

I want to know more about random number generation.

Follow-up:

Please visit the following sites:

http://www.random.org/randomness/
http://www.fourmilab.ch/hotbits/