
Sunday, July 24, 2011

Communication network design as a graph problem

Problem:

In addition to finding shortest paths, sometimes we are also interested in finding the set of links connecting all the nodes with the minimum total cost. A real-life application of this idea is in the design of communication networks. Here, the nodes of the graph represent sites and the links represent the communication connections between sites; the cost of each connection could be the communication delay. Using the following graph as an example, can you design an algorithm, similar to Dijkstra's algorithm, to solve this problem?

Follow-Up:

To solve this problem using a greedy method, we can use Kruskal's algorithm, outlined below (a code sketch follows the list).
  1. Sort all links in ascending order of cost
  2. Start with an empty set of links, so that all nodes are disconnected (isolated)
  3. Take the lowest-cost link not yet considered; add it to the set if it does not create a cycle, and discard it otherwise. Either way, this link is not considered again
  4. Repeat Step (3) until the set has n - 1 links
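
The steps above translate almost directly into code. Below is a minimal Python sketch; it assumes the graph is given as a list of (cost, u, v) tuples over nodes numbered 0 to n - 1, and it uses a simple union-find structure to detect cycles.

    def kruskal(n, links):
        # Minimum-cost set of links (a minimum spanning tree) by Kruskal's method.
        # n: number of nodes, labelled 0 .. n-1; links: list of (cost, u, v) tuples.
        parent = list(range(n))              # union-find forest

        def find(x):                         # representative of x's component
            while parent[x] != x:
                parent[x] = parent[parent[x]]    # path halving
                x = parent[x]
            return x

        chosen = []
        for cost, u, v in sorted(links):     # Step 1: ascending order of cost
            ru, rv = find(u), find(v)
            if ru != rv:                     # Step 3: no cycle, so keep this link
                parent[ru] = rv
                chosen.append((cost, u, v))
                if len(chosen) == n - 1:     # Step 4: n - 1 links form the tree
                    break
        return chosen

    # Example with made-up costs (the figure's actual costs are not reproduced here):
    # print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))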

Further Inquiry:

(a) What is the estimated running time of the above algorithm?
(b) If there are links with negative costs, would this greedy method still work?

Tuesday, July 19, 2011

HW: "Hard" and "Unsolvable"

Problem (20 points):

This problem is aimed at enhancing your understanding of “hard” and “unsolvable” problems.

In 1965, computer chip pioneer Gordon E. Moore noticed that transistor density in chips had doubled every year in the early 1960s, and he predicted that this trend would continue. This prediction, moderated to a doubling (in transistor count) every 18 months, is known as Moore's Law. People often extend the meaning of Moore's Law to imply that computer processing speed doubles every 18 months.
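
To make the doubling rule concrete: taking 18 months as 1.5 years, a span of T years contains T/1.5 doublings, so the speed grows by a factor of 2^(T/1.5); for example, T = 3 years gives 2^2 = 4 times the speed.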

(a) According to the speed-doubling result of Moore's Law, how many times faster is a computer today than one ten years ago?

(b) Suppose we have a computing problem that can be solved in estimated running time on the order of n, where n is the problem size (e.g., the number of nodes in a graph). Using the same amount of time, how much bigger a problem can we solve with a computer today than with one ten years ago?

(c) Suppose we have a computing problem that can be solved in estimated running time on the order of (n^2), where n is the problem size (e.g., the number of nodes in a graph). Using the same amount of time, how much bigger a problem can we solve with a computer today than with one ten years ago?

(d) Suppose we have a computing problem that can be solved in estimated running time on the order of (n^6), where n is the problem size (e.g., the number of nodes in a graph). Using the same amount of time, how much bigger a problem can we solve with a computer today than with one ten years ago?

(e) Suppose we have a computing problem that can be solved in estimated running time on the order of (2^n), where n is the problem size (e.g., the number of nodes in a graph). Using the same amount of time, how much bigger a problem can we solve with a computer today than with one ten years ago?

(f) For Problem 3 of Tutorial 3, someone suggested that we might solve the problem (i.e., finding integer roots for the equation) by simply testing every possible integer. Please comment.

HW: Dynamic programming

Problem (10 points):

In Tutorial 3 Problem 2, we worked on the knapsack problem in which there is only one unit of every item i. Here we consider the version where there is an unlimited supply of every item. In this version, we can set up a dynamic programming formulation around the quantity K(x), defined as the maximum value achievable with a knapsack of capacity x.

(a) Please give a recurrence for K(x).

(b) Please try to write up the computing procedure for K(x).

(c) What is the estimated running time (in terms of n and W)?

(d) Please draw a graph (with directed links) to illustrate the computation (just like the ones I showed you in Tutorial 3 and in the Tutorial 1 follow-up). How would you describe the computation using this graph?

Sunday, July 17, 2011

Dijkstra's shortest path problem

Problem:

We learned about Dijkstra's shortest path algorithm in class. Try to apply it to the following graph.



What is the shortest path between s and b?

Follow-Up:

Using Dijkstra's algorithm and starting from s, the path found is s-a-b. Yet we can easily see that the shortest path should be s-a-c-b. Thus, the greedy method does not always work: here, Dijkstra's algorithm is defeated by the existence of links with negative costs. A code sketch illustrating this failure is given below.
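
For reference, here is a minimal Python sketch of the classical (unidirectional) Dijkstra procedure. The edge costs in the example at the bottom are made up, since the figure's actual values are not reproduced here; they are chosen so that, as in the graph above, the cheaper path through the negative-cost link is missed when we start from s.

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping node -> list of (neighbour, cost) pairs.
        # Returns (dist, prev); correct only when all costs are non-negative.
        dist = {source: 0}
        prev = {}
        settled = set()
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in settled:
                continue
            settled.add(u)                   # greedy step: u is fixed for good
            for v, cost in graph.get(u, []):
                if v in settled:
                    continue                 # a settled node is never revisited
                nd = d + cost
                if v not in dist or nd < dist[v]:
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    # Made-up costs (NOT the figure's actual values), with one negative link c->b:
    graph = {
        's': [('a', 1)],
        'a': [('b', 1), ('c', 3)],
        'c': [('b', -5)],
        'b': [],
    }
    print(dijkstra(graph, 's'))   # settles b via s-a-b even though s-a-c-b is cheaper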

However, as some of you discovered during the tutorial, the correct shortest path can be found by running Dijkstra from the destination instead, i.e., from b. Please try this yourself.

Indeed, in real-life systems (e.g., MapQuest), a bidirectional approach is used: Dijkstra's search is run concurrently from both the starting point and the destination, and the two searches are reconciled along the way to construct the final shortest path.

Further Inquiry:

If the link costs along a path are multiplied together (e.g., when the links are associated with probabilities), would Dijkstra's algorithm still work?