Thursday, November 24, 2011

Analysis of Angry Birds

Group 17
Chiu Kin Kwan, Yu Sheung Hung, Cheung Pik Ying, Jackson Cheung
HKU CCST9003 FA11


Wednesday, November 23, 2011

Presentation: Facebook: Technologies and Societal Impact

Ho Ka Chun Kelvin, Li Ho Hin Kelvin, Lo Ka Ming Terrence, Sze Chi Chun Carrie, Wong Ka Yan Karen
HKU CCST9003 FA11

Tuesday, November 22, 2011

Presentation: Operating Systems

Group 7
Wong Ching Yat, Yau Cheuk Hang, Yu Miaoxia, Pau Wing Hong, Wong Lok Kwan
HKU CCST9003 FA11

Monday, November 14, 2011

Security and cryptography

Comment 1:

Security is a “system” concept.

Follow-up:
Yes, it is very important for us (the users) to understand this, so that we do not get a false sense of security when we are “educated” that our data are encrypted. Now you know that data encryption is just one part of the whole process. If anything goes wrong in other parts of the system, security cannot be guaranteed.

Comment 2:

HTTPS protocol?


Follow-up:
This is the so-called “secure” version of the HTTP protocol. Basically, this protocol transports encrypted data instead of sending data in plaintext. The data is usually encrypted using a symmetric key system, for which the shared key has to be agreed upon using a public key approach. Please refer to Problem 3 of Tutorial 5 for the design of such a key set-up protocol.
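
To make the idea concrete, here is a toy Python sketch of this hybrid set-up. It is NOT the real TLS handshake behind HTTPS: tiny textbook RSA numbers stand in for the server's public key, and a simple XOR cipher stands in for a real symmetric algorithm such as AES; all names and numbers are for illustration only.

# Toy sketch of the HTTPS key set-up idea (NOT real TLS).
import random

# A tiny textbook RSA key pair (illustration only, far too small to be secure).
p, q = 61, 53
n = p * q              # public modulus (3233)
e = 17                 # public exponent
d = 2753               # private exponent, since e * d = 1 (mod (p-1)(q-1))

def rsa_encrypt(m):    # anyone can do this with the public key (e, n)
    return pow(m, e, n)

def rsa_decrypt(c):    # only the server can do this with the private key d
    return pow(c, d, n)

# 1) The client picks a random symmetric key and sends it RSA-encrypted.
sym_key = random.randrange(2, 255)
wrapped = rsa_encrypt(sym_key)

# 2) The server recovers the symmetric key with its private key.
assert rsa_decrypt(wrapped) == sym_key

# 3) Both sides now use the fast shared key for the actual data
#    (a toy XOR cipher stands in for a real symmetric algorithm).
def xor_cipher(data, key):
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"hello server", sym_key)
print(xor_cipher(ciphertext, sym_key))   # b'hello server'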

Comment 3:

Stealing bank account information from the Internet?


Follow-up:
Yes, whether you like it or not, this kind of thing is believed to be happening all the time! The point is that it is not very difficult to identify the “weakest” link in the system (e.g., a particular e-commerce Web site). It is widely believed that after such a system is broken, the hacker will not just use the bank account information (e.g., for buying things) but will also hold the bank and/or the e-commerce Web site for ransom.

Comment 4:

What is symmetric key cryptography?


Follow-up:
Symmetric key systems have always been the most important way to provide data confidentiality, even though public key systems are more versatile and “strong”. The reason is that symmetric key algorithms are usually much faster than public key algorithms. In a typical symmetric key system, a shared key has to be agreed upon through some means (see Comment 2 above). Then, the communicating parties use the shared key for encryption/decryption.

Comment 5:

Are there any more sophisticated cryptography techniques?

Follow-up:
One of the most notable sophisticated cryptography techniques is elliptic curve cryptography, which is based on yet another branch of mathematics (also related to number theory) to perform encryption and decryption.

Comment 6:

Public key cryptography. RSA algorithm?

Follow-up:
We have already worked extensively on this in Tutorial 5.


Comment 7:

Difference between public key and symmetric key cryptography.


Follow-up:
The most important difference is NOT the strength, but the way keys are distributed/shared.

Saturday, November 12, 2011

Greedy algorithm, Google Maps vs MapQuest, WiFi vs 3G

Comment 1:

Are there other algorithms to find the shortest path that use recursion or dynamic programming?

Follow-up:

Yes, there is one called the Floyd-Warshall algorithm, which uses dynamic programming to find all-pairs shortest paths. This algorithm can also handle graphs with negative link weights (as long as there is no negative cycle).
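
For the curious, here is a minimal Python sketch of the Floyd-Warshall idea; the tiny example graph and the variable names are just for illustration.

# Floyd-Warshall: all-pairs shortest paths by dynamic programming.
INF = float("inf")

def floyd_warshall(weights):
    """weights[i][j] = link weight from i to j (INF if there is no link)."""
    n = len(weights)
    dist = [row[:] for row in weights]       # start with the direct links
    for k in range(n):                       # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example with a negative (but non-cyclic) link weight:
w = [[0,   3,   INF],
     [INF, 0,   -1 ],
     [INF, INF, 0  ]]
print(floyd_warshall(w))   # distance from node 0 to node 2 becomes 2, via node 1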

Comment 2:

What happens in the shortest path greedy algorithm (i.e., Dijkstra) when two distances surrounding the node are equal?

Follow-up:

Good observation. We will usually just “randomly” choose one to break the tie.

Comment 3:

Any technical and systematic ways to calculate the time-complexity of an algorithm?

Follow-up:

Yes, sure. For more complicated situations, we usually end up with a bunch of summations of series when counting the number of key steps. Then, we need to use some mathematical tools to obtain closed-form expressions. These are computer science topics, though.

Comment 5:

If 3G and Wi-Fi on my smartphone are similar things, why do I notice a significant difference between the speed of loading the same page on 3G and Wi-Fi?

Follow-up:

3G and Wi-Fi are similar in that they are both wireless communication technologies. But the similarity ends there. The wireless communication mechanisms used are highly different in many aspects. For example, 3G is based on cellular communication which is designed for longer range and thus, speed is lower (as electromagnetic signals deteriorate significantly over a distance). Wi-Fi is of a much shorter range and can therefore afford to provide a higher speed. There are many other technical differences, which are topics of a wireless communication and networking course.

Comment 6:

The “shortest path algorithm” can be demo-ed on 9 slides with animation instead of using just one slide. It is a bit too small.

Follow-up:

Thanks for the comment! You are right! Will improve this. Sorry for the inconvenience.

Comment 7:

How come MapQuest/Google-Map is so fast on a map that has billions of nodes?

Follow-up:

One trick is that MapQuest/Google Maps generally do not do the computations “on demand”, i.e., they pre-compute many routes, which are then stored in a database. When someone posts a query, the majority of the routes are pulled out from the database.

Comment 9:


How to resolve the conflict when there is negative weight in the graph when using Dijkstra’s
algorithm?

Follow-up:

Dijkstra’s algorithm fails when there is negative weight in the graph. We will need to use a different technique, e.g., the Bellman-Ford algorithm. See also Comment 1 above.

Comment 10:

You have mentioned that researchers in the field of computing try to give everything a unique ID (a unique IP address), for example, a microwave oven. However, I don’t really understand why we would do so. What are the purposes? If that applies widely, will there be any privacy problems?

Follow-up:

Yes sure I believe there will be significant privacy problems! Maybe you can consider using this as your survey topic?

Comment 11:

How do we obtain O(e + n log n) for Dijkstra’s algorithm?

Follow-up:

A rough sketch is as follows: we need to examine all the links (during the updating of the estimated distances labelled on the nodes), which is why we have the e term. We also need to keep the nodes sorted in increasing order of estimated distance, which is why we have the n log n term.
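
For readers who want to see where the two terms come from, here is a minimal Python sketch of Dijkstra's algorithm using the standard heapq module; strictly speaking, a binary heap gives O((e + n) log n), and the exact O(e + n log n) bound needs a Fibonacci heap. The small graph is just an illustrative example.

import heapq

def dijkstra(graph, source):
    """graph: dict mapping a node to a list of (neighbour, weight) pairs."""
    dist = {node: float("inf") for node in graph}   # estimated distances
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)     # node with the smallest estimate
        if d > dist[u]:
            continue                   # stale heap entry, skip it
        for v, w in graph[u]:          # each link is examined once: the e term
            if d + w < dist[v]:        # update the estimated distance
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))   # heap work: the n log n term
    return dist

g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 2, 'C': 3}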

Comment 12:

Why does the greedy approach usually result in a fast algorithm?

Follow-up:

This is because, as we make a greedy choice in each step, we reduce the problem size by 1. Thus, after making n greedy choices (i.e., n steps), we finish the problem. Consequently, we usually end up with an algorithm that takes O(n) time, plus the time needed for some pre-processing (e.g., sorting, which takes another n log n time).
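
As a small illustration, here is a Python sketch of one classic greedy algorithm, the fractional version of the knapsack problem touched on in Comment 13 below: an n log n sort followed by one greedy choice per item. The item values are made up for the example.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; fractions of an item may be taken."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)  # pre-processing sort
    total = 0.0
    for value, weight in items:          # one greedy choice per step
        if capacity <= 0:
            break
        take = min(weight, capacity)     # take as much of the best item as fits
        total += value * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0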

Comment 13:

Knapsack problem. It can be applied to daily life. Is it similar to the linear programming taught in Maths at cert level?

Follow-up:

Yes it is similar. Wait until Tutorial 3 to find out more details.

Comment 14:

Dynamic programming is a bit difficult to understand. Also the DNA sequence example.

Follow-up:

The theme of Tutorial 3 is dynamic programming. Hopefully you will feel better after going through this tutorial cycle. The DNA sequence example is interesting, and I think you can understand at least 70% of it by studying the Web page mentioned in class.
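
If you would like a smaller example in the same spirit as the DNA one, here is a minimal Python sketch of a classic dynamic programming computation, the longest common subsequence (LCS) of two strings; the two short DNA-like strings are just for illustration.

def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b."""
    n, m = len(a), len(b)
    table = [[0] * (m + 1) for _ in range(n + 1)]   # table[i][j]: LCS of a[:i], b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1          # extend a match
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]

print(lcs_length("GATTACA", "GCATGCU"))   # 4, e.g. the subsequence "GATC"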

Comment 15:

I am a bit confused by the graph that shows the different representations of running time that result in different values.

Follow-up:

I think you are talking about the “estimated distances” labelled on the nodes in the example graph showing how Dijkstra’s algorithm works. Those values are NOT running times (of an algorithm). They are used to represent, for example, the time it takes to travel from one place to another geographically (when we use the graph to represent a map).

Comment 16:


Why is “99” used to represent the information that has not been processed?

Follow-up:

Yes, in computer programming we often use such tricks, i.e., a sentinel value that stands for “not yet processed” (or “infinity”), to ease processing.

Comment 18:

Google Maps is more developed than MapQuest these days. For example, it can alert you when friends are located close by.

Follow-up:

Thanks!

Comment 21:

I am not quite clear about how the two determinant factors affect the efficiency of the greedy approach.

Follow-up:

The main factor is the “greedy choice”. If you pick the right greedy choice at each step, the algorithm ends up giving the “optimal” result. In terms of speed, please refer to Comment 12 above.

Friday, November 11, 2011

Brief introduction about Wolfram Alpha

Pau Wing Hong
Wolfram|Alpha (also known as Wolfram Alpha) is more than a search engine like Google. Instead, it is an answer engine/computational engine developed by Wolfram Research. Traditional search engines like Google and Yahoo are only capable of providing a list of links to information; they don’t answer questions. They only take your keywords at face value and don’t always yield good results. What really makes Wolfram Alpha shine is that it can compute, just like a calculator. It computes solutions and responses from a structured knowledge database.

Since the day it launched, the Wolfram Alpha knowledge engine has contained 50,000 types of algorithms and models and over 10 trillion pieces of data. It is still in development and is always adding new information to its database. As Wolfram|Alpha runs on 10,000 CPUs with Mathematica running in the background, it is capable of answering complicated mathematical questions.

The service is built on four basic pillars: a massive amount of data, a computational engine built on top of Mathematica, a system for understanding queries, and technology to display results in interesting ways. Wolfram Alpha is also able to answer fact-based questions such as “When did Steve Jobs die?” It displays its response as a date, the time difference from today, and anniversaries for October 5, 2011.

There are a number of things that make Wolfram Alpha vastly different from Google. First of all, it is capable of answering complex queries. If complex search queries are typed into Google, it gets confused, because it cannot compute, unlike Wolfram Alpha. Just like a calculator, Wolfram Alpha does not care how many arguments are given to it; that is why concatenating many arguments in a query often works extremely well. Apart from that, the answers and calculations from Wolfram Alpha are very accurate and precise, so there is no need to worry about the validity of the information. Thirdly, two sets of data can be compared with graphs easily using Wolfram Alpha, which Google cannot do.

Nevertheless, Wolfram Alpha does have its limitations. Since its answers are based on its own software and knowledge database, Wolfram Alpha can only answer a fact-based question that has a specific answer. So it is not able to answer open-ended questions like “Is Wolfram Alpha better than Google?”

As written on its main page, Wolfram Alpha’s goal is to make deep, broad, expert-level knowledge accessible to anyone, anywhere, anytime. Clearly, the “Google Killer” is quite ambitious. However, in my opinion, Wolfram Alpha is not a typical search engine in essence. Therefore it is not a Google Killer as people might say, but it can be considered a giant calculating encyclopaedia of statistics and facts. I think the site poses more of a threat to sites like Wikipedia.

Thursday, November 10, 2011

Recursion, randomization, sorting and computations

Comment 1:

Divide-and-conquer vs. recursion?

Follow-up:

Divide-and-conquer is a general technique in which we divide the problem into smaller parts and then solve the smaller parts independently. Recursion is closely related to divide-and-conquer in that it is usually the most concise way to express a divide-and-conquer idea. However, a divide-and-conquer idea does not always need to be realized using recursion. Indeed, sometimes we would like to avoid recursion because it can be very slow, as you have seen (or will see) in Tutorial 1.
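
As a small illustration of why naive recursion can be slow, here is a Python sketch comparing a recursive and an iterative Fibonacci computation (an illustrative example, not the exact exercise in Tutorial 1): the recursive version re-solves the same sub-problems over and over, while the loop solves each sub-problem once.

def fib_recursive(n):
    if n <= 1:                 # base cases
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)   # repeats a lot of work

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):         # each sub-problem is solved exactly once
        a, b = b, a + b
    return a

print(fib_recursive(25), fib_iterative(25))   # both print 75025
# fib_recursive(35) already takes noticeably longer, while fib_iterative(35) is instant.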

Comment 2:

Randomization?

Follow-up:

Let us consider a simple problem in order to illustrate the usefulness of randomization. This problem is about several important but related concepts: worst-case analysis, average-case
analysis, and probabilistic analysis. Consider the following “hiring” algorithm:

1) Set candidate best to be unknown;

2) For each of the n candidates, do the following:

3) Interview candidate i ;

4) If candidate i is better than candidate best, then hire candidate i and set best to be i ;

Assume that interviewing each candidate has a cost of c_In and hiring a candidate has a cost of c_H (where c_H > c_In under normal circumstances).

(a)

Can you give the worst case total cost of the above hiring algorithm?

(b)

Assume that the candidates come to the interview in a random order, i.e., each candidate is equally likely to be the best. Specifically, candidate i has a probability of 1/i of being the best among the first i candidates. Can you give the average case total cost of the above hiring algorithm?


Hint: You can consider using the “indicator random variable” X_i, which is equal to 1 if candidate i is hired and 0 otherwise. Hence, the average number of candidates that are actually hired is equal to the “expected value” (i.e., the average) of the sum X_1 + X_2 + ... + X_n.

Answers:

(a)

The worst case is that every interviewed candidate is hired. Thus, the total cost is: c_In * n + c_H * n.

(b)

In this average case, the only change to the total cost is the hiring part, so let’s focus just on this part. Specifically, as given in the Hint, the average number of candidates that will be hired is the expected value of X_1 + X_2 + ... + X_n, which in turn is equal to 1/1 + 1/2 + ... + 1/n. A good bound for this sum is log n. Thus, the average case hiring cost is just c_H * log n, which is much smaller than the worst case’s hiring cost.
The lesson we learn is that the average case is sometimes much better than the worst case.

As you can see, it is helpful to assume that all permutations of the input are equally likely so that a probabilistic analysis can be used. Now, here is the power of randomization—instead of assuming a distribution of inputs (i.e., the candidates), we impose a distribution. In particular, before running the algorithm, we randomly permute the candidates in order to enforce the property that every permutation is equally likely. This modification does not change our expectation of hiring a new person roughly log n times. It means, however, that for any input we expect this to be the case, rather than for inputs drawn from a particular distribution.
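
For readers who like to experiment, here is a rough Python simulation of this idea: we impose the random order ourselves by shuffling, and the observed number of hires indeed grows like log n rather than n. The parameters (n = 1000, 200 trials) are arbitrary.

import math
import random

def hires_after_shuffle(n):
    candidates = list(range(n))      # each number stands for a candidate's quality
    random.shuffle(candidates)       # impose a random permutation ourselves
    best, hires = -1, 0
    for quality in candidates:
        if quality > best:           # better than the current best: hire
            best, hires = quality, hires + 1
    return hires

n, trials = 1000, 200
average = sum(hires_after_shuffle(n) for _ in range(trials)) / trials
print(average, math.log(n))          # both values are around 7 for n = 1000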

Comment 3:

Quantum computing? Parallel processing? Biological computing?

Follow-up:

These are really exotic computing models that we will elaborate on in a later part of the course. Please be patient. Thanks!

Comment 4:

There are a lot of different mathematical ways of documenting/calculating numbers/codings. How come only a few can be applied to computing processing algorithms?

Follow-up:

Good point. But please note that not many mathematical methods of calculation can be realized, in a mechanical manner, as a computing procedure (i.e., carried out by a computer). For instance, think about integration in calculus: there are many integration problems that need very good “inspection” or insight to solve. Agree?

Comment 5:

Insertion sort?

Follow-up:

Please find the sketch of a computing procedure using insertion sort below.

(1)Given a list of numbers A[1], A[2], ..., A[n]
(2)for i = 2 to n do:
(3)move A[i] forward to the position j <= i such that
(4)A[i] < A[k] for j <= k < i, and
(5)either A[i] >= A[j-1] or j = 1

Now, it is not difficult to see that the number of checkings/swappings in lines (3) to (5) above cannot be larger than i. Thus, the total number of steps, i.e., the estimated running time, would be at most 2 + 3 + ... + n, i.e., on the order of n^2.
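
For completeness, here is one possible runnable Python version of the sketch above; the while loop corresponds to the checkings/swappings in lines (3) to (5).

def insertion_sort(A):
    for i in range(1, len(A)):           # line (2) of the sketch
        key = A[i]
        j = i
        while j > 0 and A[j - 1] > key:  # move A[i] forward past larger values
            A[j] = A[j - 1]
            j -= 1
        A[j] = key
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]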

Comment 6:

Quicksort? Randomization?

Follow-up:

The Quicksort algorithm looks very similar to the algorithm that you have worked (or will work) on in Problem 3 of Tutorial 1 (about “searching”). So I leave it to you to write up the computing procedure. You can also prove that the estimated running time is n log n.

On the other hand, I would like to add a bit more about the “randomization” part used in Quicksort. Similar to the “hiring problem” in Comment 2 above, we need a certain “distribution” in the input list in order to realize the potential of Quicksort (or of divide-and-conquer, for that matter).
Specifically, in Quicksort, we would like to choose a pivot so that the resulting two partitions are of more or less equal size. It is reasonable to assume that if the list is somehow “totally random” (we will talk more about generating randomness later on), then a randomly selected number from the list is likely to have a value right in the middle, i.e., it will divide the list into two roughly equal halves. So, just like in the hiring problem, we randomly shuffle the list before sorting and then, statistically, we expect the list to be divided into equal halves when we partition it.
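
Here is a minimal Python sketch of a randomized Quicksort in this spirit; it picks a random pivot instead of shuffling the whole list first (which has the same statistical effect), and it builds new lists for clarity rather than partitioning in place.

import random

def quicksort(A):
    if len(A) <= 1:
        return A
    pivot = random.choice(A)                    # random pivot choice
    left   = [x for x in A if x < pivot]        # partition around the pivot
    middle = [x for x in A if x == pivot]
    right  = [x for x in A if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 5]))   # [1, 2, 3, 5, 6, 8, 9]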

Comment 7:

P2P should be discussed/elaborated.

Follow-up:

We will spend some time discussing P2P systems in a later part of the course. Please be patient.
Thanks!

Comment 8:

We talked about our actions being monitored, even in P2P because we are accessing the Trackers. But what about the ISPs? They track everything we do. What about VPN (virtual private network)? Can it prevent ISPs from tracking us?

Follow-up:

Yes, it is true that the ISPs keep track of our moves all the time. So when the law enforcement people need the information (with a warrant), the ISPs will supply it. Even a VPN (i.e., setting up so-called private links in the form of encrypted channels) cannot help, because ultimately your IP address has to be revealed; only the data can be encrypted. We will discuss more about Internet security and privacy in a later part of the course.

Comment 9:

Feasibility of parallel processing? For example, in the Tower of Hanoi problem we are limited by the number of pegs and the rules of the game.

Follow-up:

Yes you are right. How to do things in parallel in a computer has been baffling researchers for decades.

We will discuss more about these difficulties later in the course.

Comment 10:

Isn’t it true that “recursion” is something just like mathematical induction?

Follow-up:

Yes you are absolutely right! Very good observation. Indeed, recursion, or even divide-and-conquer, is closely related to the “induction” concept. We try to “extrapolate” solutions of smaller problems to larger ones. That is the idea.

Comment 11:

CPUs keep evolving nowadays. Their computational speeds increase exponentially, and this lowers the significance of the effectiveness of an algorithm for solving a problem, as CPUs can carry out the tasks equally fast and well. Still, thinking of an effective algorithm remains challenging and worth pursuing.

Follow-up:

Oh, this one I cannot agree with. Indeed, as you will find out in this course, and as we will discuss in more detail soon, there are some problems that cannot be solved practically without a smart algorithm, even if you have thousands of processors at your service.

Wednesday, November 9, 2011

Speech synthesis

Lau Kwan Yuen

In this day and age, a machine speaking to us is not a surprise. Stephen Hawking, for example, uses a “speaking machine” to “speak” and communicate with others. Even mobile phones and electronic dictionaries have this ability. The technology is called speech synthesis. This survey focuses on the description, principles and applications of speech synthesis.

Speech synthesis artificially produces human speech. In the 1950s, the first computer-based speech synthesizer was invented. Speech synthesis can be implemented in hardware or software, and the computer systems used for it come in several types.

Speech can be synthesized by linking up pieces of recorded speech stored in a database. Alternatively, a completely synthetic voice output can be created by using a model of the characteristics of the human voice.

In 1968, the first text-to-speech (TTS) system was created. A TTS system can convert normal language text into speech, while some systems can turn symbolic linguistic representations such as phonetic transcriptions into speech. Such a system has two parts, a front-end and a back-end. The front-end converts raw text containing numbers and symbols into fully written-out words (text normalization). Moreover, the front-end assigns phonetic transcriptions to each of the words and breaks the text into phrases. The back-end then converts the phonetic transcriptions into sound.

In some specific applications, the speech synthesis quality needs to be higher. In general, the quality of a speech synthesizer is judged by its similarity to the human voice and by how easily it can be understood, i.e., its naturalness and intelligibility. Different syntheses have different features and usages; some of these systems are discussed below.

One of the syntheses providing good naturalness is unit selection. The reason is that it applies only a little digital signal processing, which often produces unnatural sound, to the recorded speech. Moreover, some systems smooth the waveform at the points of linkage by using some signal processing.

Another synthesis is formant synthesis. It does not use any human speech sample; in other words, the speech is entirely artificial, created by using additive synthesis and an acoustic model. The system uses the model to simulate the voicing, noise levels, fundamental frequency, and so on.

Various computer operating systems have adopted speech synthesis. For instance, the two very popular operating systems, Apple iOS and Android, have added support for it. In iOS, used on the iPhone and iPad, the VoiceOver speech synthesizer is included for accessibility for users with certain disabilities.

Since Microsoft Windows 2000, Narrator, a text-to-speech utility for visually impaired users, has been included. Also, the CoolSpeech program can be run in Windows to speak text from webpages and text documents.

Moreover, speech synthesis systems are used in many different entertainment products. For example, some e-books can be read out by a speaker for convenience. This allows people with reading disabilities or visual impairments to listen to the words in the book, so that they can enjoy reading too.

What is more innovative is that speech synthesis has been applied in software called Vocaloid, a singing synthesizer application by Yamaha Corporation. This software lets users synthesize singing and create their own songs with a virtual singer by typing in a melody and lyrics. It uses synthesis technology with specially recorded vocals of voice actors or singers. The software allows users to change the stress of the pronunciation, vibrato, dynamics and tone of the voice. The Vocaloid software is sold as “a singer in a box”, a replacement for a traditional, actual singer. Therefore, the application of speech synthesis is not confined to assistive purposes.

To conclude, speech synthesis is a significant technology in our daily life. It not only aids people with visual or verbal impairments, but also brings us new experiences in entertainment. The quality of synthesized voices is expected to improve in the future, so that an artificial voice can become even more similar to an actual human voice. Then, the recognition of such speech can be easier and more accurate.

References
1.    “What is Speech Synthesis?”
http://www.wisegeek.com/what-is-speech-synthesis.htm
2.    eSpeak: Speech Synthesizer
http://espeak.sourceforge.net/
3.    Speech Synthesis and Recognition
http://www.dspguide.com/ch22/6.htm
4.    vozMe - From text to speech (speech synthesis)
http://vozme.com/index.php?lang=en
5.    VOCALOID
http://www.vocaloid.com/

Tuesday, November 8, 2011

Leave the decisions to human beings, not machine

Kayson Wong

I love computers. Of the many reasons, one is that computers do not lie. It’s true. Computers are machines; they do not think, and they have no feelings. Computers always give you the exact same output when given the exact same input. In fact, it’s all logic.

What’s logic? Logic is something that everybody in this world agrees on. You must agree, and you have to agree; that’s how logic is defined. When you ask everybody the same question, they always give you the same answer. Mathematics is a field that is derived from logical rules. For example, if you ask someone “What is 1+1?”, they always reply “2”. It’s not because they think or feel the answer is “2”; it is because they’re told that the answer has to be “2”. By the definition of numbers and operations, “2” is the only answer to this question. Computers are designed with this same idea. At the core of a computer, logic operations are performed by the CPU. Computers will always produce the exact same output if they’re given the exact same input. This is a characteristic of every computer on this planet.

Today, many computer scientists are trying to push the limits of our computers further, toward something called “Artificial Intelligence”. Artificial Intelligence, or AI, is a term used to describe algorithms that simulate tasks that are normally accomplished only by human beings. As a result, decisions that were human-made in the past are now handled by computers, or to be more exact, by the programmers who came up with the algorithm. Some people today like to refer to this as “smart” technology. Camera manufacturers design AI algorithms so their cameras can “smartly” determine the “appropriate” shutter speed, aperture, focus and ISO speed for taking a photo. Translation software can translate an entire paragraph of text from one language to another. Robots can play table tennis. However, many AI algorithms are applied inappropriately, and in a lot of cases just make the problem worse. Professional photographers never use auto modes to take pictures; in fact professional cameras never include an auto mode, simply because the shooting settings calculated by the AI are far from the ideal settings for taking a photo. Language translators, up to this day, often distort the original meaning of the text, or even come up with grammatically weird sentences. As for the robots, I cannot understand why someone would design a robot to play table tennis. These ball games are designed for human beings, for fun and health, not for the sake of the games themselves.

Putting computers in the seat of decision making is even worse when safety is a concern. In the Netherlands, the Maeslantkering is a floodgate designed to protect the city from storm surges. The gates are entirely run by a computer; no human intervention can override the computer’s decision. The computer is programmed to close the gates if the surge height predicted by weather models is higher than 3 meters. During a storm, it happened that the computer predicted the surge height to be 2.99 meters, so it left the gates wide open. Obviously, no one living in the city would care if the water level were 1 cm higher, but it matters to the computer!

Human beings must be involved in decision-making processes because, unlike computers, humans can react to a situation creatively and come up with solutions that best resolve the problem. Despite the technology we have today, pilots remain essential in an airplane, because we believe that should an emergency occur, pilots can react creatively and come up with the best way to solve the problem. Engineers cannot foresee each and every possible situation that can happen, so we need human beings in the cockpit to make decisions, despite the fact that sometimes they make mistakes.

Another decision-making process that is more abstract and convoluted is the creation of art. While some may argue that in the future computers can accomplish this job, they cannot. It’s not because of technical difficulties; it’s because this task is theoretically impossible to achieve. Recall that computers operate based on logic: every computer will give us the exact same response when the conditions we input to the computer are exactly the same. Can we convert the process of creating artistic materials into a logical process? No. Every one of us has a different feeling towards art. I may look at a painting in a museum and see nothing; the guy next to me sees everything. We all interpret art in a distinct way, a way that is based on the things we have seen, the skills and talent that we have, our personal characters, and even our mood at that time. If we ask two persons to write some music, it’s very likely (if not certain) that the two pieces of music will be completely different. Every person in this world gives us a different piece of music, because our DNA is different and our experiences are different. All the computers in this world, on the other hand, are always the same. This fundamental disparity distinguishes human beings from computers. So, it is not possible to come up with an algorithm that can replace human beings in art creation.

But what if we’re not trying to come up with an algorithm to write all the music in this world, but instead asking the computer to write the music that Beethoven himself would have written? If we are to simulate such a creative process with a computer, we must answer the question of whether a human being is “computable”. That is, can we write an algorithm to simulate a human being?
The answer to this question is also no. To simulate a human being, we would have to simulate every physical and chemical change that occurs inside a human body. This requires simulation at the atomic level. If we could calculate how each and every atom in a human body would behave, we could then work out all the chemical and physical reactions, and thus simulate the human body. The truth is we cannot simulate every atom, because it is impossible to predict how a certain atom would behave in a given situation. It’s not even possible to measure the state of each atom. There is a principle in quantum mechanics called the “uncertainty principle”. The short version of this principle is: if you measure the position of a particle to very high accuracy, then you can only know its momentum with very low accuracy, and vice versa. That is to say, we cannot, even in theory, take absolutely accurate measurements; there are always errors to a certain extent. Due to this error, we cannot exactly predict how each atom would behave. Quantum mechanics is actually a study of the probabilities of atoms. For example, there is a 70% probability that the atom will pass through a certain region, and a 30% probability that it will be bounced back. It’s like throwing a die: nobody knows the result. Fortunately, this probability is very useful, because usually we are dealing with a very large number of atoms when applying quantum mechanics. If we throw 6 million dice, then we can be quite sure that about one million of them will land with the face “1” on top. As we reduce the number of dice, this one-sixth prediction becomes less and less accurate. When it comes to one die, it is very hard to tell the result. The same applies to the atom case. This is why it is impossible to run physical simulations down to the atomic level. Even if we could, we would have to ask another question: is a human being a “computer”? That is, if I have two persons who are exactly identical, who share the exact same DNA and have the exact same experience, will they react exactly the same way to a certain situation? I doubt it.

So in the end, what are computers good for, if they can never be as smart as humans? The answer is repetitive tasks. A lot of our economic activities involve mass production, which means producing a large amount of the same product to minimize cost. In the past, without technology, human workers were hired to do this job. They had to repeat the same tasks over and over again, every day, every week, and every year. Unfortunately, human beings get bored with repetitive tasks. As it turns out, these human workers did not enjoy much of their life. Today, these jobs are handled by machines in factories. Machines can accomplish the same task over and over again, and more importantly, machines never get tired, get sick, get bored, or make careless errors. Machines are ideal for this job.

The decision-making part is the part that should be left to humans. We’re not just living in this world for the sake of living; we’re living because we want to enjoy life. If machines can create music, make paintings, feed us and talk to each other, why do we need to live? We’re not trying to turn computers into humans; we’re trying to make them help humans. Machines are merely the tools we use to accomplish our goals. In this context, it’s best to let computers or machines take over the dirty, dangerous or boring jobs on this planet, and let humans enjoy the sunshine at our beautiful beaches.

References
  1. Borel, Brooke. "A Ping-Pong-Playing Terminator." February 16, 2010. http://www.popsci.com/technology/article/2010-02/ping-pong-playing-terminator.
  2. "Maeslantkering." Wikipedia. n.d. http://en.wikipedia.org/wiki/Maeslantkering.

Monday, November 7, 2011

Cloud Computing

What is Cloud Computing?
Cloud computing is a new way of using resources on the internet. The technology is called ‘cloud computing’ because it stores resources on the internet, and the internet is usually symbolized by a cloud in network diagrams. The main concept of cloud computing is storing everything on servers. Cloud computing can be divided into three main parts: software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS).

The concept and technology of cloud computing are quite simple. Nowadays, an abundance of companies are starting to develop cloud computing services. It is high time to look into the matter in depth, considering its advantages and disadvantages.
 
Advantages of Cloud Computing
1. Internet space is unlimited
The internet is a platform for storing resources. As there is no hard limit on the number of servers, the storage space available on the internet is effectively unlimited. Moreover, it is highly expandable: it can be expanded very easily and at low cost. When users upload or store their resources on the internet, this really solves the problem of the limited storage of the user’s own computer, and the less data stored on the computer, the faster the computer can run its applications. For example, Apple is now focusing on iCloud, a technology applied to the iPhone, where users’ photos, information and songs are all stored on the internet so as to overcome the limited storage of the iPhone, even though it already has quite a large storage capacity of 4 GB to 8 GB.

2. User friendly
The main reason is that cloud computing allows users to get by with less professional IT knowledge, because not only the resources but also the software can be used over the internet. In this way, cloud computing is user friendly: it skips the step of installing software and saves users the money of buying the software. Most importantly, it provides great convenience and agility for multiple users to share their information. Once resources are stored on the internet, users can get to them whenever and wherever they can connect to the internet. These days, an abundance of companies use a ‘private cloud’, which is infrastructure operated solely for a single organization, whether managed internally or by a third party and hosted internally or externally.

Disadvantages of Cloud Computing
Nevertheless, there are still some problems with cloud computing. Since cloud computing stores resources on the internet, it means storing resources on a great number of servers. Normally, those servers are highly concentrated in one place so that they can be managed easily. However, once there is an accident, for example an unexpected power cut or a fire, it will be a catastrophe for the users. In addition, the more servers we have, the more electricity we need to run them, because they operate 24-7 non-stop. According to research, the cost for Google to run its servers is not cheap; once cloud computing is implemented thoroughly all over the world, is the cost of supporting it really reasonable? Is it worth it? And will it lead to other problems, such as environmental protection issues, given that data centers should be built near rivers because water is needed to cool down the servers? These are only the tip of the iceberg. Most importantly, the fatal limitation of cloud computing is that if there is no network available, there is nothing the user can do.

Sunday, November 6, 2011

Music files = MP3 ?!

    .AAC, .OGG, .flac: have you ever heard of these file formats? No? Then what about mp3? I am quite sure that you have heard of it, even if you have never tried downloading one. But what if I tell you that all of them are actually audio files and that they serve much the same function as mp3, i.e. storing audio data? Here, I will try to introduce what mp3 is and whether there are alternatives to it, since mp3, as shown later, has some drawbacks.

    Mp3 is widely used as a synonym for music files. It is popular in the field of portable and digital music playing and is well supported by a great number of both hardware and software players. The term “MP3” refers to an audio format called “MPEG-1 Audio Layer III” or “MPEG-2 Audio Layer III”. It is an audio format that compresses and stores raw audio data with a lossy method, producing a much smaller file than the original source, with quality comparable to the original. Typically, an mp3 file is 4 to 11 times smaller than the raw audio file, depending on the compression rate. Its notably reduced file size made it popular, especially in the days when hard disks stored only tens of gigabytes and network speeds were slow. However, the reduction in file size comes at a cost. Now, let’s look at how the compression works first.

    Mp3 exploits a perceptual limitation of human hearing called auditory masking. Sometimes when you listen to two sounds, one becomes inaudible, or as we say, is masked by the other sound. This is the principle of auditory masking. Of course, not all sounds mask others; however, through extensive experiments, the general mechanism has become known. Simply speaking, mp3 makes use of this mechanism and filters out the sound in a clip that is not audible to humans, hence greatly reducing the data content and resulting in a reduced file size.

    However, the above is just what the theory says. In practice, there are some constraints. No matter how well the theory or mechanism works, as a lossy compression format mp3 unavoidably loses some of the audio data during compression. The maximum amount of data an mp3 file can store per second of audio depends on its bit rate. Typically, a soundtrack on a CD has a bit rate of 1411.2 kbps. The bit rate of an mp3 file is confined to several levels, with the lowest at 8 kbps and the highest at 320 kbps. The trick is to retain as little data as possible while maintaining quality comparable to that of a CD by using auditory masking. But of course, for files with a very low bit rate, no algorithm will help, i.e. an mp3 file with too low a bit rate still sounds bad. Officially, it is suggested that a 128 kbps mp3 file should sound like a CD. However, this result is subject to change under different circumstances and with different audiences, which brings out the second constraint of mp3. The mp3 algorithm depends on the perceptual limitations of humans; however, different people have different auditory sensitivity. One person may find a 128 kbps mp3 acceptable while another finds it unbearable.
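
To get a feel for the numbers, here is a rough back-of-the-envelope calculation, written in Python, of the file size of a hypothetical 4-minute track at the CD bit rate and at two common mp3 bit rates.

# kbps = kilobits per second; divide by 8 for bytes, by 1024*1024 for megabytes.
seconds = 4 * 60
for kbps in (1411.2, 320, 128):
    megabytes = kbps * 1000 * seconds / 8 / (1024 * 1024)
    print(f"{kbps:>7} kbps -> about {megabytes:.1f} MB")
# 1411.2 kbps (CD)  -> about 40.4 MB
#    320 kbps (mp3) -> about  9.2 MB
#    128 kbps (mp3) -> about  3.7 MB, roughly 11 times smaller than the CD data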

    Moreover, there are some technical and legal concerns about using mp3. Mp3 was developed about twenty years ago, and there are some technical defects in the algorithm, such as not being able to record sound with frequencies above 15 kHz, while humans can hear frequencies up to 20 kHz. Legally, mp3 is a patented audio format, and using it incurs license fees. Bear in mind that the license fees here are for the compression algorithm itself and are not related to copyright issues concerning the music clip. The story is made more complicated as several companies claim ownership of patents related to mp3, while some argue that it should be patent-free since, in the United States, patents cannot claim inventions that were already publicly disclosed.

    Mp3, due to its compact file size, fairly good quality and, more importantly, its history, is now so popular that other audio formats are not well known to the public. Here, I just want to point out some of them. AAC is a newer and more advanced audio format designed to be the successor of mp3; AAC generally achieves better sound quality than MP3 at similar bit rates. .OGG normally refers to audio files containing an audio format called Vorbis. It is worth mentioning because it is a patent-free, open-source format that provides audio with quality higher than the other lossy audio formats. Nowadays, more and more applications support ogg, along with an increasing number of games using ogg for audio effects. Note that the above two are both lossy audio formats, i.e. upon compression, some data from the raw audio clip is lost irreversibly. There is another type of compression called lossless compression, i.e. all the data is preserved in the compression process, while still producing a relatively small resulting file. Of course, it cannot produce files smaller than those made by lossy methods, but it is adored by audiophiles as it can output audio with quality exactly the same as the original CD. FLAC is one of the lossless compression formats. It provides satisfactory size reduction while requiring relatively little computational effort. Again, it is patent-free and open-source.

    I believe that digital music will be the main trend from now on. By now, you should have a better understanding of mp3 and other audio formats. I do not mean to ask you to abandon mp3. I just want to remind you that what you hear may not be the whole truth, and to introduce some of the more advanced formats, especially those that are open-source and patent-free, which I think will become dominant in the future.

Saturday, November 5, 2011

The Environmental Impact of Google Search Queries

Howard Kwok

In this rich era of the development of human civilization, information technology has flourished. Networks such as the World Wide Web have become an integral part of our daily lives, made easily accessible through search engines such as Google, which allow quick queries for locating suitable websites. This survey aims to explore and understand the environmental cost of using internet search engines such as Google, and hopes to encourage further contemplation on whether using the internet truly has a sustainable carbon footprint, and to give food for thought on viable alternatives.
The issue was first revealed through a study by Dr Alex Wissner Cross at Harvard University, which was submitted for peer review to further consolidate its reliability. Dr Cross’s findings, upon being published, claimed that websurfing “contributes to a greater global carbon footprint” than the entire aviation industry from 2007 onwards. He then went on to describe how the mere act of typing in a Google search query uses up a significant amount of energy, and how such usage has a negative impact on the environment.

Many internet users might be surprised and perhaps even skeptical about the amount of energy consumed in a single Google search. Accordingly, quantitative results from Dr Cross’s study are summed up here in order to fully communicate to the reader the magnitude of the energy cost of supporting a Google search. In simple layman’s terms, websites store their files on servers, connected via an intricate system of networks, which are in turn viewed on the viewer’s personal computer. This entire system runs on electricity, generated in power plants largely from the combustion of fossil fuels. According to Dr Cross, each search query causes 5-10 g of CO2 to be produced by power plants. To put these figures in the context of daily life, two search queries consume approximately the energy needed to “boil a kettle” of water for tea, according to the UK Times. When one multiplies this figure by the millions of Google users, the energy consumed reaches gargantuan proportions. Furthermore, the mere act of staying on a website produces around 20 mg of CO2 a second, and this figure can go up to 300 mg of CO2 a second when the website displays “complex” animations and video. In short, the energy consumed by staying on a website depends on the complexity of its layout, and in modern-day society many websites have very flashy and user-friendly homepages designed to attract more visitors. As such, the consequences do not seem to be very good for the environment.

One reason, according to Dr Cross, behind the large energy use of Google searches is the “unique infrastructure” of its search engine. Upon making a query, the query is replicated to different servers, which then compete to provide answers within the shortest time possible. As Google has many gigantic server farms ranging across Asia and Europe, searches made through the search engine are answered at high speed, providing an optimal user experience. Unfortunately, the energy needed to transmit messages between server farms over such distances contributes significantly to a large carbon footprint.

Google has published a rebuttal of Dr Cross’s findings, stating in a blog post that each search generates only 0.2 g of CO2, and that Dr Cross’s findings were greatly exaggerated. It further stressed that it was committed to providing the fastest, most accurate searches with maximum efficiency, and that it had made many contributions towards a green approach. According to Google Green, sustainable energy is already being used, with servers running on 30% energy from renewable sources such as wind and solar power. Google also invests in outside projects that reduce CO2 emissions, to allegedly “offset” its own emissions and achieve an overall zero carbon footprint.

The different numerical values provided by Google and Dr Cross caused a ‘blogstorm’ across the web, with many alleging that Google deliberately avoided the issue and ignored Dr Cross’s findings. However, I personally believe such arguing misses the point, as the main issue should be the focus here: the gigantic energy consumption caused by the use of information technology altogether. Combined, the IT sector accounts for 2% of global greenhouse gas emissions, which is quite a horrifying figure when one realizes that it is more than the emissions of the entire aviation industry.

Many issues can be realized here.

References
  1. http://technology.timesonline.co.uk/tol/news/tech_and_web/article5489134.ece
  2. http://www.timesonline.co.uk/tol/news/environment/article5488934.ece
  3. http://www.googleguide.com/google_works.html

Friday, November 4, 2011

Grid Computing – Now and future development

There are lots of problems and questions that need to be solved in academia. It is time consuming for scientists to solve these problems even when they use supercomputers to compute the results. However, grid computing does a great job of solving some complicated problems that traditionally would have to be done by a supercomputer. Although grid computing is not as popular among ordinary computer users as cloud computing and cluster computing, it is quite popular for solving professional problems such as scientific and mathematical questions.

What is Grid Computing?
Grid computing refers to the combination of computing resources from heterogeneous computing devices to reach a common goal. Although grid computing is a kind of distributed computing, it is different from cluster-computing-based systems. In grid computing, the devices can be widely spread out, and they need not share the same computational architecture. Middleware is needed to divide and allocate the jobs that the devices are responsible for computing, over a network which is typically the Internet or an Ethernet. There are several big projects using grid computing to solve problems, for example Genome@Home, Folding@Home and MilkyWay@Home. All these projects contribute a lot to human society.

Advantage of using Grid Computing
There are many advantages to applying grid computing to scientific, mathematical and academic problems through volunteer computing. This means that the people who share the computing power of their devices do not get any money from the institutes; it is on a voluntary basis. The institute therefore does not need to pay a lot to obtain great computing power. Also, the cost of combining the processors of many normal personal computers is far lower than the cost of building a tailor-made supercomputer, because the mass production of normal processors lowers the cost of ordinary CPUs. With grid computing, multiple devices can provide computing resources similar to those of a multi-processor supercomputer. Hence, the cost is relatively low compared with setting up a tailor-made supercomputer to solve the problems. Besides, the institutes do not need lots of space to house a distributed computing system or a supercomputer, and maintenance costs can also be saved.

Disadvantage of using Grid Computing
Although grid computing can lower the cost of computation, there are some restrictions. Firstly, as the devices in a grid may not have stable and instant connections with the other devices, the computation has to be divided into parts that are highly parallel and can be solved independently. This increases the difficulty of designing the programme. Moreover, grid computing relies mainly on volunteer computing, which limits the maximum computing power of the grid.

Future Development
In the past few years, grid computing has mainly relied on the CPU as the main processing resource. More and more computing power is now provided by the Graphics Processing Unit (GPU) and by the Cell processor in the PlayStation 3 (PS3). These processors achieve more Floating Point Operations per Second (FLOPS), which is the main kind of operation in scientific problems, than the traditional CPU in a traditional PC. As the GPU does not always have heavy work to do, it has more idle time compared with the CPU. Also, it is impossible for gamers to play with their PS3 all day, which means more available computing power. Therefore, computing power from GPUs and PS3s can be obtained more easily than from CPUs. In the future, more PCs will be equipped with GPUs of high computing power to fulfil the needs of multimedia. With some promotion of grid computing, it is believed that grid computing will provide considerable computing power and recycle more idle computing power.

Secondly, more devices now have high computing power, such as mobile phones, tablets and Home Theatre PCs (HTPCs). Their computing power is increasing while their power consumption is decreasing, so they can be potential devices for grid computing. In addition, the development of high-speed connections to the Internet, through both wired and wireless links, makes grid computing easier than before. It is believed that more computing devices can join grid computing and share their idle computing power. At the same time, the institutes can join their supercomputers and mainframes to speed up the processing. This may form an ultimate virtual supercomputer.

In short, grid computing will continue to develop to help scientists study difficult problems in the future. With more computing devices available to join the grid computing model, the potential for developing grid computing is unlimited. Also, it is better to encourage people to join grid computing and share their idle computing power in order to greatly boost the computing power of the grid. Letting their devices join the grid is better than leaving the devices idle and doing nothing; at least there is some contribution to society. There is no doubt that developing grid computing is beneficial to us.

Reference
1.    Grid computing – Wikipedia. Retrieved October 2011 from Wikipedia: http://en.wikipedia.org/wiki/Grid_computing
2.    Folding@home. Retrieved October 2011 from Folding@home http://folding.stanford.edu/English/Main
3.    Folding@home – Wikipedia. Retrieved October 2011 from http://en.wikipedia.org/wiki/Folding@Home

Thursday, November 3, 2011

Applications of the recursive computing paradigm

Recursion is a computing technique that helps solve complex problems in a simple way, by using the result for the previous value of a function and finding a relation between the previous result, the current result and the current value. The result we use would itself have been obtained using the answer for the value before that, and the cycle goes on until we reach a value for which we know the answer. This value, also referred to as the ‘base case’, gives us the answer to all the other cases, known as the ‘recursive cases’.

 For example, let us calculate n! (the factorial of a number n) using recursion. In recursion, we use the value of (n-1)! and multiply the number n by it. To obtain the value of (n-1)!, we use the value of the factorial of n-2, and so forth. This continues until we reach a value for which we know the answer. In this example, we know that the value of 1! is 1. Thus we keep reducing the problem from n to n-1 to n-2 ... down to 1, for which we know the answer. We then use this answer to find all the unknown successive values: we multiply 2 by 1! to get the factorial of 2, and we keep multiplying by successive numbers to get the values of their factorials. The basic idea of recursion is to reduce a problem, step by step, to a case for which we know the answer.
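
Written as code, the whole idea fits in a few lines; here is a minimal Python sketch of the factorial example (for n >= 1, matching the base case above).

def factorial(n):
    if n == 1:                        # base case: we know that 1! = 1
        return 1
    return n * factorial(n - 1)       # recursive case: n! = n * (n-1)!

print(factorial(5))   # 120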

Recursion is very useful in solving complicated problems, such as the traditional ‘Towers of Hanoi’ problem. In this problem, we are given n disks, and our job is to transport the disks from tower A to tower C (as shown in the figure). However, we can only move one disk at a time, and we cannot place a bigger disk on a smaller disk.
Picture Source: http://www.alper.net/wp-content/uploads/2008/03/hanoi.jpg
The problem is very complicated to solve with iteration. However, it becomes a small and comparatively easy-to-understand problem with the recursive approach. Here is the approach (a short code sketch follows the steps below):

a)    Assume the problem is solved for n-1 disks, so that we can move a stack of n-1 disks at our disposal.
b)    So, we move the top n-1 disks to tower B.
c)    Then we move disk n to tower C.
d)    Now, we move the n-1 disks from tower B to tower C, on top of disk n, and the problem is solved.
e)    The problem for n-1 disks is solved using the same procedure, replacing n by n-1 and n-1 by n-2. We keep reducing the problem until we reach a value for which we know the answer. In this case, we know that for n = 1, we move disk 1 directly to tower C.
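
Here is a minimal Python sketch of steps (a) to (e), in which each recursive call prints the single-disk moves it makes; the tower names are just labels.

def hanoi(n, source, target, spare):
    if n == 1:                               # base case: move disk 1 directly
        print(f"move disk 1 from {source} to {target}")
        return
    hanoi(n - 1, source, spare, target)      # step (b): move n-1 disks to the spare tower
    print(f"move disk {n} from {source} to {target}")   # step (c): move disk n
    hanoi(n - 1, spare, target, source)      # step (d): move n-1 disks on top of disk n

hanoi(3, "A", "C", "B")   # prints the 7 moves needed for three disks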

Thus in recursion, we just need to find a relation that gives us the answer for n, assuming we know the answer for n-1. In reality we do not know the answer for n-1, but we find it out by finding the answer for n-2, n-3 and so forth, until we reach a value for which we do know the answer. Hence, recursion makes problem solving easy compared to other approaches.

However, the disadvantage of recursion shows when it comes to computing. Recursion is slower and takes more space than iteration, making it less efficient than iteration for the same algorithm. Recursion requires more space as it stores the values to be calculated on a ‘stack’. It is slower because function calls take time compared to calculations within the same function. To understand this, we can compare the computer to our brain. For example, suppose we are given the problem of adding all the numbers from 1 to 5. The iterative way requires us to add 1, then add 2 to it, and so on, simply adding all the numbers from 1 to 5 without remembering anything else. However, a recursive approach requires us to start from 5 and add to it the sum of the numbers from 1 to 4. This requires us to keep 5 in our brain and find the sum of the numbers from 1 to 4. This in turn requires us to keep 4 in our brain and solve for all the numbers from 1 to 3, and so on, repeating the cycle until we reach the base case, which is adding all the numbers from 1 to 1, which we know is 1. So, recursion requires us to keep the numbers from 2 to 5 in our brain, which requires additional space. It also makes the calculation slower than the iterative procedure, as it is more complicated for us to call the ‘function’ again and again, rather than simply adding the consecutive numbers as part of the same function.
But we cannot forget the importance of recursion in solving complicated problems. Problems such as the Towers of Hanoi become so much simpler with recursion! Recursion also sometimes leads to solutions that outperform the obvious iterative alternatives: quick sort, a recursive algorithm, is one of the fastest general-purpose sorting techniques in use today!
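For reference, here is a compact recursive quick sort sketch in Python (my own illustration; real implementations usually partition in place for efficiency):

def quick_sort(items):
    # Base case: lists of 0 or 1 elements are already sorted.
    if len(items) <= 1:
        return items
    pivot = items[0]
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    # Recursively sort both parts and join them around the pivot.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]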

Wednesday, November 2, 2011

A brief overview of the computing phenomenon of selling virtual currency for real money, and how extreme it has become

Andrew Yeung

First picture this:
Log onto an online game. See a monster. Click on it. Kill it. Receive gold for the kill - if only it were possible to earn money like this in real life.
Now envision this:
Tons of workers are crammed into a stuffy room in a developing country, as they slave away to make products for the western world - just another sad reality of a typical sweatshop.
Combine the two, and what do you get?

Hold that thought.
Ever since the breakthrough of the internet, one industry in particular has flourished: the online video game industry. This is especially the case for Massively Multiplayer Online Role-Playing Games (MMORPGs), which offer players the chance to be whoever and whatever they want in a fictional universe. The chance to escape from reality for a while was too good to pass up.

However, as is so often the case with humans, the line between fiction and reality begins to blur. And here is another area in which the video game industry has ‘flourished’: showcasing the ugly impacts computing has had on society. We have all read about gamers who died after gaming for too many hours straight, neglecting their real-life bodies in favor of their virtual ones. Then came the cases of people deciding that their virtual lives were more important than their real ones, and committing suicide when their accounts were stolen. These cases, while tragic, have thankfully remained exceptions within today’s massive gaming community. Nonetheless, they serve as reminders of the extent to which humans can be influenced by a virtual reality.

Therefore, it should have come as no surprise when economics inevitably found its way into the realm of gamers. One day someone wondered whether he could actually sell virtual items for real money, somebody else wondered whether it was possible to buy in-game items for cash, and supply and demand was born.

Since then, this phenomenon, which has come to be known as ‘gold farming’, has become commonplace in many of the world’s biggest gaming communities, namely Everquest, Eve Online, Lineage, World of Warcraft, and even Runescape (!), amongst others. Although it is a practice that is usually frowned upon, and in some cases actively opposed by the gaming companies, the ongoing demand for such a ‘service’ will ensure the survival of its supply, with this ‘industry’ worth an estimated $3 billion USD (infoDev, 2011) as of this year. Enter the words “World of Warcraft Gold” into that other computing phenomenon known as Google, and instantly around 46 million results pop up, a majority of them trying to sell you virtual gold. Instant supply right there.

However, as I will examine, the nature of this supply has evolved in a way that one might have found hard to imagine a few years ago. The game I will investigate is the most popular MMORPG in the world today: World of Warcraft.

World of Warcraft, otherwise known as WoW, has been a hugely popular MMORPG since its release in 2004, steadily amassing a huge player base that stands at 11.5 million players today (StrategyInformer, 2011), a number even considered to be low, as WoW continues its 7-year reign over other games of the genre. Naturally, one would rightly assume that there exists a demand for WoW gold farming. But what one might not know is the face of the person farming behind the screen. He may be just a kid with way too much spare time on his hands, you think. Maybe that really is the case. Or maybe…

Allow me to draw you back to the scene presented at the start of this survey.

Welcome to the world of China’s gold farms.
 

In these so called “farms” that resemble sweatshops with computers, Chinese “gold farmers” hunch over monitors as they repetitively slay in-game monsters for money. Their shifts may go up to 12 hours or more, and it is not uncommon to find farmers sleeping on the floor, exhausted and resting for the upcoming grueling 12 hour grind.  Farm size and working conditions vary greatly from place to place, but the nature of the job is always the same - simple and mind numbing: Kill monsters. Earn gold. Kill more monsters. Earn more gold. Repeat.

Earning an average of around 100 USD a month, an estimated 100,000 Chinese workers were employed on these farms as of 2007 (UCSD News, 2007). While it is not easy to document the statistics of such an unconventional, under-the-radar industry, and the figures do vary, it is almost certain that this number has increased since then.

At the end of the day, what implications does this have for the state of our society, where the life and income of a person in a developing country are determined merely by the computing parameters set for a piece of software, namely the game, by a gaming company in a developed country? Has technology reduced able-bodied people into mind-numbed mouse clickers whose wealth (or lack of it) is all determined by one simple line of computing code that decides how much gold a monster drops? For someone’s life to be defined by how many pixels he clicks on?
Still, it is not all bad. Workers on some of these farms have bonded as they work, eat and game together, an interesting case of a virtual community transcending into a real-life one. Furthermore, it cannot be denied that this sector provides employment options and an income opportunity. Mundane as the job may be, as one gold farmer describes: “Working in a room made safe for computers is going to offer better conditions than working behind a plough in some field” (The Times, 2006).
However, as is always the case with humans, there is a sinister side to things that not many know of.

The scene is set in a Chinese prison. As part of their labour regime, prisoners are forced to break rocks, dig trenches, carve chopsticks or assemble other products and…you guessed it. Play World of Warcraft. That’s right.

All across Chinese labour camps, prisoners are being exploited to farm gold, and are even physically punished if they fail to meet their work quotas. The shifts are a grueling 12 hours ON TOP of the prisoners’ physical labour, making them hell to endure. It has been reported that this lucrative operation could bring in 5,000-6,000 RMB a day from this gross fashion of exploitation (The Guardian, 2011). Who would have known that killing a virtual monster for virtual currency could have such corrupt implications behind it?

Whatever the case, computers and computing have definitely changed the world, in obvious ways and in less obvious ones such as the odd phenomenon of gold farming. While it undeniably brings benefits to select groups of people, there are also social and ethical considerations that compel us to take a good hard look at ourselves and at the directions in which technology has allowed us to advance. The extent to which some people now depend on small computing processes that we take for granted is overwhelming, and, as is unfortunately the case with humankind, survival techniques that lead to profit often slide into exploitation.

One thing is for certain. If you are a gamer, the next time you turn on your computer, log onto your game, kill a monster and pick up that gold…be thankful that you have the luxury of doing it for fun.

References:
Pictures retrieved from: http://www.newmedici.com/wp-content/uploads/2009/04/gold-farming-china-wow7go-530.jpg, and UCSD News (See below reference)
infoDev. (2011). Converting the Virtual Economy into Development Potential. Retrieved October 2011, from http://www.infodev.org/en/Publication.1056.html
StrategyInformer. (2011). World of Warcraft population dips to 11M subscribers. Retrieved October 2011, from http://www.strategyinformer.com/news/12312/world-of-warcraft-population-dips-to-11m-subscribers
The Guardian. (2011). China used prisoners in lucrative internet gaming work. Retrieved October 2011, from http://www.guardian.co.uk/world/2011/may/25/china-prisoners-internet-gaming-scam
The Times. (2006). Gamers’ lust for virtual power satisfied by sweatshop workers. Retrieved October 2011, from http://technology.timesonline.co.uk/tol/news/tech_and_web/article648072.ece
UCSD News. (2007) By the Sweat of their Browser. Retrieved October 2011, from http://ucsdnews.ucsd.edu/thisweek/2007/04/23_goldfarmers.asp

Tuesday, November 1, 2011

Microblogging: Twitter

Seitzhan Madiyev

The Information Technology industry keeps surprising us. After the creation of the Google search engine and the “internet bubble” crisis of the 2000s, it developed somewhat more slowly, and in a different direction, than many expected. People made predictions about the future of the IT industry and how it would change the way we live. As time passed, many of the trends predicted back then did not live up to expectations, or simply did not get enough attention from the community. Instead, other projects took their place: projects that most people did not even think about ten years ago, but without which a lot of people cannot imagine their lives today. The top trend of this decade belongs to a group of products that can be called “social networks”. And between the monstrous creation of Mark Zuckerberg and the huge failure of “MySpace”, there is one more invention which I believe will have an even greater impact in the near future: “Twitter”.

How it works
Twitter is a simple microblogging service which allows users to send and read text-based posts of up to 140 characters. When a user wants to read the public messages sent by another user of interest, he/she simply connects to that user through a button named “follow”. Everyone who follows a user then receives all the messages sent by that user. It was first described by its creators as a service that uses SMS to tell small groups what you are doing. When people create an account, the site simply asks them to share with others: “What are you doing?”.
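The follow-and-broadcast mechanism described above can be sketched in a few lines of Python (a toy illustration of my own; this is of course not how Twitter is actually built, and the class and method names are made up):

class User:
    def __init__(self, name):
        self.name = name
        self.followers = []   # users who will receive this user's posts
        self.timeline = []    # posts received from followed users

    def follow(self, other):
        other.followers.append(self)

    def tweet(self, text):
        text = text[:140]     # posts are limited to 140 characters
        for follower in self.followers:
            follower.timeline.append((self.name, text))

alice, bob = User("alice"), User("bob")
bob.follow(alice)             # bob now receives everything alice posts
alice.tweet("What's happening?")
print(bob.timeline)           # [('alice', "What's happening?")]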

Socio-political significance
Twitter was often criticized as useless, since its users posted a lot of “rubbish” and meaningless information, like “I ate a sandwich”. However, since 2008 the project has taken on an important socio-political significance. On February 16, 2008, the photographer James Buck was arrested by Egyptian police. He wrote “arrested” on his Twitter account, and the news quickly reached the US authorities. The next day he was released. During the U.S. presidential election, Twitter was actively used by both candidates, Barack Obama and John McCain, for their campaigns. It has also been blocked in many countries to stop information about certain events from spreading.
When the management team understood the importance of their service, they changed the business model, and the question became “What’s happening?”. Society started to realize the importance of the project. It became the fastest media platform for releasing the latest news and updates on almost every event occurring in the world. Now even media outlets are forced to get the newest information from Twitter, at the risk of reporting false information. It has become the most efficient media platform, since everyone with an account has effectively become a reporter.

A lot of people argue that most of the messages on Twitter are so-called “Pointless Babble”, so a study was carried out to analyze what people actually share with others, and the results were interesting. Pointless Babble won with 40.55% of the total tweets captured; however, Conversational was a very close second at 37.55%, and Pass-Along Value was third (albeit a distant third) at 8.7% of the tweets captured (Twitter Study). Pass-Along Value covers the messages that actually matter: breaking news, relevant information and interesting thoughts shared by famous politicians, business people, and so on. Although 8.7% sounds small, it is still large in absolute terms: with around 200 million tweets sent every day, it works out to roughly 17 million such tweets daily.

New business model
Twitter has created an entirely new way for businesses to promote their products and for people to promote themselves. Since the 2000s, marketing has been moving more and more towards the Internet, and just a few years ago the so-called “social networks” helped to create new business models and increase the efficiency of marketing. One of the most important tools created by social networks is “target marketing”, which allows businesses to find the right audience by dividing users into different groups. But if social networks like Facebook are an efficient platform for targeted adverts, how has Twitter created a new way? It is simple: “brand journalism”.

Nowadays adverts get less and less attention from the public and become less and less effective, and that is where brand journalism may come in. Brand journalism was created as a tool to promote products and companies in an innovative way. A company can hire a professional journalist who writes only about that company: its everyday activities, its new products, the chronicle of what has happened. People are interested in hearing new stories, and Twitter is an ideal platform to tell those stories and to share photos and links.

I believe that Twitter has not reached its peak yet, and there is more to come in the future. From being one of the most innovative products of recent years, it might become an essential part of our lives.

References
  1. MG Siegler. Russian President Medvedev Sends His First Tweet At Twitter. — TechCrunch, 23.06.2010
  2. Chris Nuttall. What’s happening? A lot, says Twitter COO. — The Financial Times Tech blog (blogs.ft.com/techblog), 20.11.2009
  3. Twitter: "pointless babble" or peripheral awareness + social grooming? — Danah Boyd blog (www.zephoria.org), 16.08.2009
  4. Owen Fletcher, Dan Nystedt. Internet, Twitter Blocked in China City After Ethnic Riot. — PC World, 06.06.2009
  5. Om Malik. A Brief History of Twitter. — Gigaom.com, 01.02.2009
  6. BBC admits it made mistakes using Mumbai Twitter coverage. — The Guardian, 05.12.2008
  7. Claudine Beaumont. New York plane crash: Twitter breaks the news, again. — The Telegraph, 16.01.2009
  8.  John Brandon. Barack Obama wins Web 2.0 race. — ComputerWorld, 19.08.2008
  9. Twitter Study. — Pear Analytics. — August 2009

Sunday, October 30, 2011

The influence of Smartphone Apps on Handheld game console market

LAM YAT HANG

    Starting from 1976, with the invention of the first handheld game console, playing video games was no longer restricted to the home or the game center. We started to see people playing video games while waiting for buses, queuing for tickets or even attending lectures. The handheld game console allows us to have fun anytime, anywhere.

    More than 30 years have passed, and today the handheld game console is still very popular. It is not difficult to find people holding a PSP or an NDS on the street. The PSP and the NDS, the two most competitive consoles in the handheld game market, began their rivalry in 2004: both were first introduced that year and were immediate hits. To date, the PSP has sold more than 71 million units and the NDS more than 147 million units, and with new versions of the two consoles being released, their sales keep growing. In 2009, the NDS accounted for 70% and the PSP for 11% of portable game software revenue in the U.S. It seems that no other console in the market can compete with them. However, if you also count the mobile phone as a console, they have gained a strong competitor.

    Many people may wonder whether the mobile phone can be considered a type of console. Before the appearance of iOS and Android, the games installed on mobile phones were usually simple ones like “Snake” and “Minesweeper”. Mobile phone games were usually of lower quality than those on traditional game consoles and were not very popular, so people did not consider the mobile phone to be a handheld game console. However, with the rise of iOS, Android and the smartphone in recent years, game developers have started to put more money into this market to develop more and better games. The games are not simple any more; some of them, like Angry Birds and Fruit Ninja, have even become worldwide hits. These days we often see people playing video games on the street holding not an NDS or a PSP, but a mobile phone. Due to this popularity and success, many people now consider the mobile phone to be one of the handheld game consoles.

    Is the mobile phone really threatening the other consoles? A recent study found that the share of iOS and Android in U.S. portable game revenue rose from 19% in 2009 to 34% in 2010 and is predicted to increase further in the following years. While the combined share of iOS and Android grew by 15 percentage points in one year, the shares of the NDS and the PSP fell by 13 and 2 percentage points respectively. Many researchers believe that the appearance of the smartphone and its applications, known as “apps”, is one of the factors behind the decline in the market share of traditional game consoles.

Some news reports also say that Nintendo, the corporation that invented the NDS, has classified iOS and Android as its strongest competitors in the handheld game market.

What makes the smartphone a strong handheld game console? The mobile phone is considered a must-have item for people living in cities. In 2011, smartphones made up 40% of all mobile phones in the U.S. market, and this share is expected to increase further, which shows that the smartphone is becoming more and more popular. Before the smartphone appeared, people had to bring an additional device such as an NDS or a PSP with them to play video games. Many people had to carry a cell phone together with a console when they went out, which was a little inconvenient, and many did not own both a cell phone and a game console. The smartphone changes the situation: it combines the mobile phone, the game console and even the computer in one device. People carrying a smartphone are, in effect, carrying a game console at the same time, and with the device at hand they are more likely to play video games. Although not everyone uses a smartphone to play video games, one study found that 29% of users do. With smartphones selling amazingly well in recent years, more than 100 million units in the fourth quarter of 2010 alone, the smartphone can be said to be the console with the most consumers.

Apart from that, the smartphone also provides a more convenient way for consumers to get video games. Video games on the smartphone are downloaded in the form of applications, usually called “apps”, and all apps, games included, are downloaded from the web. When people want a game, there is no need to go to a shopping centre or to wait for a delivery after buying it online; the purchase is instant. The consumer just needs to press an icon and the app will be downloaded and installed on the smartphone. From the moment a person decides to get a game app to the moment he can play it, the whole process takes no more than five minutes. The convenience of this process encourages people to enter the app market and buy game apps, which certainly benefits sales revenue and game developers.

With the increase in the smartphone's share of the handheld game market, more game developers are expected to enter the market and provide more games for consumers. With more choices, more consumers will be attracted. This forms a virtuous cycle, and the smartphone's share of the handheld game market is likely to increase further in the coming years.

References
  1. http://blog.nielsen.com/nielsenwire/online_mobile/40-percent-of-u-s-mobile-users-own-smartphones-40-percent-are-android/
  2. http://blog.nielsen.com/nielsenwire/online_mobile/mobile-snapshot-smartphones-now-28-of-u-s-cellphone-market/
  3. http://thegadgetsite.com/2011/04/nintendo-ds-and-psp-losing-market-share-due-to-android-and-ios/
  4. http://www.weiphone.com/iPhone/news/2010-05-15/Statistics_say_iPod_iPhone_hit_DS_PSP_sales_216609.shtml
  5. http://news.newhua.com/news/2011/0104/112578.shtml
  6. http://blog.sina.com.cn/s/blog_5025e3880100ot99.html
  7. http://www.gamasutra.com/view/news/37715/comScore_29_Of_US_Mobile_Phone_Subscribers_Play_Mobile_Games.php
  8. http://www.hksilicon.com/kb/articles/34964/iOSAndroid-2015
  9. http://www.eurogamer.net/articles/2011-09-14-ps3-worldwide-sales-reach-51-8-million
  10. http://www.nintendo.co.jp/ir/library/historical_data/pdf/consolidated_sales_e1106.pdf