Saturday, November 22, 2008

Software Piracy


Software piracy is a criminal act that is severely punished in many countries. It is the act of copying and distributing software by means the software's author or company never intended. Piracy is taken very seriously in some parts of the world and lightly in others, but the contradiction at its heart is the same everywhere: a person caught stealing a physical object is considered a thief, yet someone who violates the license agreements set by a software company is rarely portrayed as one, and regardless of the law the software is used and spread illegally.

The term 'free software' does not mean the software is free of charge when purchased from a store. It means there are no restrictive license agreements imposed on the user, and the user may distribute it anywhere, to any number of people. When the software is bought from a store it will still carry a price tag, because the company making it needs to make a profit. The word 'free' here therefore means free as in freedom rather than free as in price: in countries like India and Bangladesh, as elsewhere, the user is simply not bound by the company to any license agreement.

When software is not free, the company has several legal protections against its being distributed without permission. When a user makes copies of the software and spreads them in the community, a large number of unlicensed users are created; this is the typical story of software piracy. The person spreading the software illegally may face charges ranging from community service to massive fines, and may even face prison.

Violation of Privacy

Non-free software typically requires a serial number as protection during installation, and the user must agree to a license agreement before installing it. A person who uses a pirated version therefore needs a 'crack' file to break through this protection and access the software.

There are several dangers in this process. Crack files are made by groups of people called crackers, whose job is to produce cracks and help distribute pirated software. Along with the crack itself, they often insert malicious software that runs in the background, such as spyware and other malware.

Malware is software designed to infiltrate or damage a computer system without the owner's informed consent. The expression is a general term used by computer professionals to mean a variety of forms of hostile, intrusive, or annoying software or program code. Malware is not the same as defective software, that is, software which has a legitimate purpose but contains harmful bugs. Software is considered malware based on the perceived intent of its creator rather than any particular features. Malware includes computer viruses, worms, trojan horses, most rootkits, spyware, dishonest adware, crimeware and other malicious and unwanted software. In law, malware is sometimes known as a computer contaminant.

Spyware is computer software that is installed stealthily on a computer to intercept or take partial control over the user's interaction with the computer, without the user's informed consent. While the term suggests software that secretly monitors the user's behavior, the functions of spyware extend well beyond simple monitoring. Spyware programs can collect various types of personal information, such as Internet surfing habits and the sites that have been visited, but they can also interfere with the user's control of the computer in other ways, such as installing additional software, redirecting web browser activity, silently accessing websites that deliver further harmful software, or diverting advertising revenue to a third party. Spyware can even change computer settings, resulting in slow connection speeds, different home pages, and loss of Internet access or of other programs. In an attempt to increase the understanding of spyware, a more formal classification of its included software types is captured under the term privacy-invasive software.

Malware and spyware are not only harmful to the operating system and the user's privacy; they can also cause financial losses through Internet money transactions. Hackers and crackers out there embed malicious software on personal computers to harvest the credit card information of unsuspecting users, potentially driving them to bankruptcy.

Spamming

Spamming is another issue that may arise due to spyware. Spammers collect email addresses and send large volumes of junk mail to them, usually attractive advertisements that may even match the user's online shopping preferences. It is much more sinister than a simple mischievous ad campaign: some of the advertisements may be fake, just another crafty way to get an unsuspecting user's credit card information.

Spamming does not stop there. Some spammers use hundreds or even thousands of personal computers as 'zombie PCs'. The spammer arranges to gain access to the computer the moment it connects to the Internet; then, without the user's knowledge, that computer is used to send mail to thousands of email addresses, and those mails are traced back to the unsuspecting user. This may not sound very threatening beyond the loss of system resources and bandwidth. But suppose the messages are more than a mischievous advertisement and instead carry something offensive, and suppose the recipients are not ordinary people but law enforcement agencies. The user would then be in deep trouble when the messages are traced back to his or her personal computer.

Effects on Software Development

Software piracy has a massive impact on software-developing companies, which lose a great deal of money each year to the illegal distribution of their products. This is not just about big companies such as Apple, Microsoft or EA, who make huge profits and also suffer large losses in certain parts of the globe, although their losses are a very small percentage of their profits. It concerns smaller, local companies far more, and it has a huge impact on software development in their region.

Many local companies may shut down, leaving software developers jobless. These programmers then turn to online freelance outsourcing, developing software over the Internet. This has both advantages and disadvantages, but shutting down local companies and forcing programmers to work online is mostly a loss: they focus on the demands made online and may not develop software for the local community at all, since local software is pirated as soon as it is released.

Creating an unethical community

Software piracy is practiced on a large scale in South America, Northern Africa, Eastern Europe and Asia. As mentioned earlier, many countries have strict laws against software piracy, so the crackers and others who violate those laws need protection from law enforcement. They turn to criminal families and gangs, who shield them from the law enforcers. Using pirated software is therefore harmful not only to you but to the community itself, creating an unethical group of people affiliated with crime families; the money saved by not purchasing the original software may well end up being used against you by these criminals.
----
By Shahnawaz Noor Alam
American International University - Bangladesh
----
The Aftermath Publications, Issue 1
----

Wednesday, November 19, 2008

Quantum Physics and Its Beauty

Once a little boy asked his father, “Dad, how big is the sky?” The father replied, “It is of infinite dimensions”. At first the boy found his father’s answer totally ridiculous. He thought to himself, “How can something be of infinite length, with no beginning and no end?!” But what the father was actually trying to convey was a simple comparison of how minuscule the boy was compared to the vastness of the sky. Given that there are things which physically dwarf us, the question arises: how small are we? Or do things smaller than us exist in this universe? One might easily come up with the answer: the atom. Others might argue that there are even smaller things within an atom. But how far can we go in the quest for the smallest existence in the universe? Quantum physics holds the answer.
It was probably the Greek philosopher Democritus who came up with the idea that matter was made up of the tiny indivisible particles we call atoms today. The idea was further developed by the British chemist John Dalton, who was satisfied with the picture of matter being made up of tiny spherical structures called atoms. Dalton considered the atom to be more or less a featureless point which could not be further divided. But Michael Faraday, in his experiments on electrolysis, showed that atoms gave up or accepted quantized amounts of electrical charge. This eventually led to a completely new version of the atomic picture, J.J. Thomson's 'Plum Pudding Model', in which the atom was no longer considered a point structure but a spherical volume of uniform positive charge in which smaller particles called electrons were embedded like the plums in a pudding. The distribution of the electrons was such that their mutual repulsion was exactly balanced by their attraction to the positive charge. Thomson's plum pudding model could explain some basic properties of the atom but failed to explain more complex phenomena such as the behavior of multi-electron atoms and the discreteness of the frequencies observed in atomic spectra.

Then in 1911, Ernest Rutherford’s gold foil experiment, more commonly known as the alpha scattering experiment, produced observations in direct contradiction with what J.J. Thomson had theorized. In the experiment, Rutherford directed alpha particles from a radioactive source at a very thin gold foil and observed that a very few of the alpha particles rebounded through angles of 90° or more, while most of the particles simply passed through with only small deviations in angle, whereas J.J. Thomson's theory predicted that all of the positively charged alpha particles should pass through the atom with minor deviations, if any. This demanded a completely new atomic picture, which Rutherford supplied in his famous planetary model of the atom: the atom is mostly empty space, a very small portion of which is occupied by electrons, and at the center there is a tiny nucleus containing the entire positive charge and almost all the mass of the atom.

Rutherford’s model could satisfactorily explain quite complex behaviors of the atom, but had a serious flaw at its foundation: it contradicted Maxwell’s electromagnetic theory. Rutherford’s model said that electrons revolve around the nucleus the way planets orbit the sun, but according to Maxwell’s equations such electrons, being centripetally accelerated, should radiate electromagnetic energy continuously. In the course of time the electrons should exhaust all their energy and spiral down into the nucleus, leading to the ultimate catastrophe of the atom, which of course does not happen in nature (luckily!). So it turned out that no one could come up with a convincing model of the atom, and this was a hard blow to the scientific community of the time. It was not until 1913 that a young Danish physicist named Niels Bohr came up with a satisfactory representation of the atomic model. He put forward two revolutionary ideas that shook the world of physics.
A simplified version of the postulates put forward by Niels Bohr is:
1. Electrons in atoms move around the nucleus in certain allowed orbits (which Bohr called energy levels) in which the electrons do not lose energy despite their centripetal acceleration.
2. The energy of an electron in these energy levels is quantized, i.e. it can take only certain discrete values and nothing in between. In other words, the radius of the electron's path around the nucleus is discrete, not continuous. Energy is given out when an electron moves from a higher energy level to a lower one, and taken in when it moves from a lower level to a higher one.
These two simple yet revolutionary ideas held the key to a proper understanding of atomic phenomena at the microscopic level. Little did Niels Bohr know that his version of the atomic model would be able to explain many other phenomena in physics that could not be explained by the existing theories of that time.
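From these postulates the allowed electron energies of the hydrogen atom follow. The formula below is the standard textbook result and is added here only as an illustration, since the article itself does not quote it:

E_n = -13.6 eV / n²,  n = 1, 2, 3, …

The energy released or absorbed when the electron jumps between two levels is simply the difference between the corresponding values of E_n, a fact that will matter again when we come to atomic spectra.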
One such unexplained phenomenon of that time was blackbody radiation. A blackbody is one which absorbs radiation of all frequencies that falls upon it, reflecting and transmitting none; and since a good absorber is also a good radiator, a blackbody is supposed to be a perfect radiator as well. When the intensity of the radiation emitted by a blackbody is plotted against wavelength, it rises to a peak at a wavelength that depends on the body's temperature and then falls away again on either side of that peak.

The Rayleigh-Jeans Law, based on classical physics, predicted that the intensity-frequency relationship should be:


I = 8πf²kT/c³

This suggested that at higher frequencies the intensity of the radiation would also be higher, and as the frequency tends to infinity the intensity should tend to infinity as well. This is obviously nonsensical and did not agree with experimental observation. The Rayleigh-Jeans Law held only for low-frequency radiation and could not explain the shape of the spectrum over the full range. This is when quantum physics came to the rescue.
In order to explain the frequency distribution of radiation from a hot cavity (blackbody radiation) Planck proposed the ad hoc assumption that the radiant energy could exist only in discrete quanta which were proportional to the frequency. This would imply that higher modes would be less populated and avoid the ultraviolet catastrophe of the Rayleigh-Jeans Law.
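Planck's resulting radiation formula, written in the same energy-density form as the Rayleigh-Jeans expression above, is worth setting alongside it (it is not quoted in the original article, so take this as a supplementary comparison):

u(f) = (8πhf³/c³) × 1/(e^(hf/kT) − 1)

At low frequencies, where hf ≪ kT, the exponential can be approximated as 1 + hf/kT and the formula reduces to the Rayleigh-Jeans result 8πf²kT/c³; at high frequencies the exponential grows so fast that the intensity falls back toward zero, which is exactly how the ultraviolet catastrophe is avoided.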

The quantum idea was soon used to explain the photoelectric effect, became part of Bohr’s theory of discrete atomic spectra, and quickly became part of the foundation of modern quantum theory.
Photoelectric effect is perhaps the most important phenomenon that could be explained by quantum physics. For a very long time, light and other electromagnetic radiation were thought to be waves. Maxwell and Lorentz had firmly established the wave nature of electromagnetic radiation in electromagnetic theory. Numerous experiments on interference, diffraction, and scattering of light had confirmed it. Then in 1905, Einstein argued that under certain circumstances light behaves not as continuous waves but as discontinuous, individual particles. These particles, or “light quanta,” each carried a “quantum,” or fixed amount, of energy, much as automobiles produced by an assembly plant arrive only as individual, identical cars—never as fractions of a car. The total energy of the light beam (or the total output of an assembly plant) is the sum total of the individual energies of these discrete “light quanta” (or automobiles), what are called today “photons.” Although Einstein was not the first to break the energy of light into packets, he was the first to take this seriously and to realize the full implications of doing so.
Now the question is: what is the photoelectric effect? Basically, it refers to the emission, or ejection, of electrons from the surface of a material, generally a metal, in response to incident light. Electrons given off in this way are called photo-electrons. Why, or more precisely how, did quantum physics play an important role in explaining this phenomenon? Here is the answer…
According to classical mechanics, light travels as a wave, which means a wave with a higher amplitude carries more energy. A bigger amplitude means the light is brighter, so a brighter light should always give off more photo-electrons. Is this right? Well, actually… No!

Consider what happens when light shines on a clean surface of the metal lithium:

● A dim blue light will make the lithium give off a few electrons, so there will be a small, measurable current.

● A brighter blue light will give a bigger photoelectric current, because there are more photo-electrons.

This seems exactly what you would expect… but a red light, just as bright as the bright blue light, gives off no electrons at all, and the current is zero!
This is impossible to explain – if light is a wave.
Now if we consider that light is made of tiny particles known as photons which were mentioned before, then we can explain the 3 results above:

►The photons of blue light contain enough energy to eject electrons from lithium. A dim blue light has few photons, so few electrons are liberated. This gives a small photo-electric current.
►A bright light has more of these high-energy photons; so more electrons are liberated, giving a larger photo-electric current.
►But red light consists of photons that do not have enough energy to eject electrons from lithium.
Even though a bright red light has very many of these photons, not one of them has enough energy to eject an electron.
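In equation form, Einstein's explanation can be written as follows. (The numbers below are my own illustration, and 2.5 eV is a hypothetical work function chosen only to make the arithmetic concrete; it is not a value given in the article.)

E_photon = hf = hc/λ  and  KE_max = hf − φ

A blue photon with λ ≈ 450 nm carries about 1240 eV·nm / 450 nm ≈ 2.8 eV, while a red photon with λ ≈ 650 nm carries only about 1.9 eV. For a metal whose work function φ is, say, 2.5 eV, every blue photon has enough energy to free an electron with roughly 0.3 eV to spare, but no red photon does, no matter how many of them arrive.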

Analysis of data from such experiments showed that the energy of the ejected electrons was proportional to the frequency of the illuminating light. This showed that whatever was knocking the electrons out had an energy proportional to the light's frequency. The remarkable fact that the ejection energy was independent of the total energy of illumination showed that the interaction must be like that of a particle which gave all of its energy to the electron! This fit in well with Planck’s hypothesis that light in the blackbody radiation experiment could exist only in discrete bundles with energy


E = hν



So, does light consist of particles or waves? When one focuses upon the different types of phenomena observed with light, a strong case can be built for a wave picture:

● Interference
● Diffraction
● Polarization

By the turn of the 20th century, most physicists were convinced by phenomena like the above that light could be fully described by a wave, with no necessity for invoking a particle nature. But the story was not over.


Phenomenon             Can be explained by waves   Can be explained by particles
Reflection             Yes                         Yes
Refraction             Yes                         Yes
Interference           Yes                         No
Diffraction            Yes                         No
Polarization           Yes                         No
Photoelectric effect   No                          Yes

Most commonly observed phenomena with light can be explained by waves. But the photoelectric effect suggested a particle nature for light. Then electrons too were found to exhibit dual natures.
Quantum physics could also explain the emission spectrum of an atom. Classical physics could not account for the presence of discrete wavelengths in emission spectra, because it predicted a spectrum over a continuous range of wavelengths. Quantum physics said that since electrons in an atom can only move from one definite energy level to another, the radiation emitted in such a transition must also have a definite frequency, which is in exact accordance with the emission spectra obtained for a wide range of atoms.
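A concrete example (my own arithmetic, using the hydrogen energy levels E_n = -13.6 eV/n² sketched earlier): the jump from the n = 3 level to the n = 2 level releases

E_3 − E_2 = 13.6 × (1/4 − 1/9) eV ≈ 1.9 eV,  so  λ = hc/(E_3 − E_2) ≈ 1240 eV·nm / 1.9 eV ≈ 656 nm,

which is precisely the red H-alpha line observed in the hydrogen emission spectrum.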
Now how far has quantum mechanics gone? We have seen how quantum mechanics has described blackbody radiation, photoelectric emission and emission spectra, which were mysteries to physicists for many years.
So does it explain everything? Even unification!
Unification would be the formulation of a law that describes perhaps everything in the universe. It is the quest for one single idea; one master equation. Physicists think that there might be a master equation that can explain all physical phenomena because, over the course of the last two hundred years or so, our understanding of the universe has produced a variety of explanations that all seem to point toward a single underlying idea, one that physicists are still trying to find.

Long before Einstein, the quest for unification began with the most famous accident in the history of science. As the story goes, one day in 1665 a young man was sitting under a tree when all of a sudden he saw an apple fall from above. And with the fall of that apple, Isaac Newton revolutionized our picture of the universe. In an audacious proposal for his time, Newton proclaimed that the force pulling the apple to the ground and the force keeping the moon in orbit around the earth were actually one and the same. In a moment, Newton unified the heavens and the earth in a single theory, which he called gravity. It was a fantastic unification of our picture of nature.

Gravity was the first force to be understood scientifically, though three more would eventually follow. Although Newton discovered his law of gravity more than three hundred years ago, his equations describing this force made such accurate predictions that we still make use of them today.
Yet there was a problem. While his laws described the strength of gravity with great accuracy, Newton was harboring an embarrassing secret: he had no idea how gravity actually works.

Then in the early 1900s, an unknown clerk working in the Swiss patent office would change all that. While reviewing patent applications, Albert Einstein was also pondering the behavior of light, and little did he know that his musings on light would lead him to solve Newton's mystery of what gravity is.
At the age of twenty-six, Einstein made a startling discovery: the velocity of light is a kind of cosmic speed limit, a speed that nothing in the universe can exceed. But no sooner had Einstein published this idea than he found himself squaring off with the father of gravity.
To understand this conflict we have to run a few experiments. Let’s create a cosmic catastrophe. Imagine that all of a sudden, without any warning, the sun vaporizes and completely disappears.
Now what would happen to the planets, according to Newton? Newton's theory predicts that, with the destruction of the sun, the planets would immediately fly out of their orbits. In other words, Newton thought that gravity was a force acting instantaneously across any distance, so we would feel the effect of the sun's destruction at once. But Einstein saw a big problem with Newton's theory. He knew that light does not travel instantaneously; in fact it takes about eight minutes for the sun's rays to travel the ninety-three million miles to the earth. And since he had shown that nothing, not even gravity, can travel faster than light, how could the earth be released from its orbit before the darkness resulting from the sun's disappearance reached our eyes? To the young upstart from the Swiss patent office, anything outrunning light was impossible, and that meant the 250-year-old Newtonian picture of gravity was wrong.
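The eight-minute figure is a quick calculation (light travels at roughly 186,000 miles per second):

t = 93,000,000 miles ÷ 186,000 miles/s = 500 s ≈ 8.3 minutes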

In his late twenties, Einstein had to come up with a picture of the universe in which gravity does not exceed the cosmic speed limit. After nearly ten years of racking his brain, he found the answer in a new kind of unification.
Einstein came to think of the three dimensions of space and the single dimension of time as woven together into a fabric of space-time, like the surface of a trampoline. This unified fabric is warped and stretched by heavy objects like stars and planets, and that warping is what we feel as gravity. A planet like the earth is kept in orbit not because the sun reaches out instantaneously and grabs hold of it, as per Newton's theory, but simply because it follows the curves in this fabric caused by the sun's presence.

So with this new understanding of gravity, let’s review the cosmic catastrophe. Now what would happen because of the sun’s disappearance?
The gravitational disturbance that results will form a wave that travels across the fabric in much the same way that a pebble dropped into a pond makes ripples across the surface of the water. So we would not feel a change in our orbit around the sun until this wave reached the earth.

Einstein’s dream was to unify the gravitational force with the other fundamental forces of nature, those now described by the Standard Model. But the basics of quantum physics prevented him from doing so. Quantum physics is the physics of the micro-world: it gives impressive results at atomic scales, where the strengths of the electromagnetic, strong nuclear and weak nuclear forces are significant. The gravitational force, on the other hand, is only significant for large masses, which simply do not occur at atomic scales.
Another thing that is quite intriguing in the realm of quantum physics is the “uncertainty” factor involved in it. Quantum physics adds the presence of odds, or probability, to everyday outcomes. Things that appear to happen as a matter of course are actually the outcomes with the highest probability of occurrence, and quantum physics instructs us to appreciate that there are an infinite number of ways an outcome may turn out. A billiard ball is expected to bounce off the walls of the billiard table while traversing a particular path. What quantum physics adds to this scenario is that the ball also has some chance of going straight through the wall instead of bouncing back, but the probability of this happening is so small that it can reasonably be neglected.

Unifying gravity with the quantum world is proving more difficult than ever. Einstein disliked the idea of a universe full of uncertainties and chances; it is probably for this reason that he once said, “God does not play dice!” Moreover, the inconsistencies between the strengths of the forces in quantum physics and general relativity are delaying any chance of unification even further. This has led to the evolution of new, extremely radical theories such as string theory, M-theory and so on, but they are still in their infancy and they predict manifestations that cannot yet be tested by any experiment. We can only hope that one day mankind will come up with a theory of everything and accomplish the Grand Unification!
----
By Mohammad Atif Bin Shafi
East West University, Dhaka, Bangladesh
----
The Aftermath Publications, Issue 1
----

Time to Turn Super

The word “super” describes something that surpasses human expectations, something unheard of or unimaginable. That is the kind of thing that happened in 1911, when a Dutch physicist by the name of Heike Kamerlingh Onnes discovered something that left him awestruck. Onnes' discovery was one of the greatest breakthroughs in scientific history. In his experiment Onnes had brought the temperature of a mercury wire down to nearly absolute zero while passing a steady current through it, and observed that the mercury ceased to show any sort of resistance. Baffled by this astonishing new discovery, Onnes is recalled saying, "Mercury has passed into a new state, which on account of its extraordinary electrical properties may be called the superconductive state". Yup, I know, maybe not the coolest way to announce the finding of superconductivity, but nonetheless a great one that opened the doors to a realm of possibilities. Besides this, Onnes also conducted another amazing experiment in which he passed a current through a loop of lead wire at 4 kelvin and, a year later, found that the current was still flowing without significant loss.

The mesmerizing workings of superconductors may be known to a lot of people, but not very much is known about the enigmatic phenomenon behind superconductive behavior. A basic definition would be that it is a phenomenon observed in metals and ceramics (inorganic non-metallic materials formed by the action of heat) when they are cooled to near absolute zero. Today, however, this definition is changing with the discovery of fullerene superconductors, which exist at the molecular level when 60 carbon atoms join to form a closed sphere and are doped with alkali metals.

Before long the scientific world was trying to explain Onnes' super discovery. In 1933 Walther Meissner and R. Ochsenfeld made another breakthrough discovery. They saw that superconductors are not only perfect conductors but also have the property of excluding a magnetic field. What this means is that a superconductor will not allow a magnetic field to pass through its interior. This interesting phenomenon occurs because the superconductor itself produces surface currents which in turn produce a field that cancels the applied field within its interior. The Meissner effect is the main cause behind magnetic levitation.

Later, in 1957, three physicists at the University of Illinois, John Bardeen, Leon Cooper, and Robert Schrieffer, presented the world with their BCS theory (named after their last names). BCS theory suggests that as an electron passes through the crystal lattice, the lattice deforms inwards around it, producing packets of sound called phonons.


These phonons create a trough of positive charge in the areas of deformation, as a result assisting the movement of the following electrons in a process known as phonon-mediated coupling; such coupled electrons are called ‘Cooper pairs’. The BCS theory was to be followed by yet another astounding breakthrough. Brian Josephson, a graduate student at Cambridge University, predicted that two superconductors could conduct electricity even if there is an insulator between them. His prediction was later confirmed, and the tunneling phenomenon is today known as the ‘Josephson Effect’. These discoveries were subsequently followed by the prediction of organic superconductors and, later, their synthesis. But what happened in 1986 was probably one of the most crucial turning points in superconductor history. Alex Müller and Georg Bednorz, researchers at the IBM Research Laboratory in Rüschlikon, Switzerland, created a brittle ceramic compound that superconducted at the highest temperature then known: 30 K. Superconductivity was no longer shackled by such severe restrictions of temperature. Scientists began to test whatever they could find and cooked up new recipes that they thought might superconduct at higher temperatures. Today the world record for the superconductor with the highest transition temperature (Tc) is held by a thallium-doped mercuric cuprate composed of the elements mercury, thallium, barium, calcium, copper and oxygen. The Tc of this ceramic superconductor was confirmed by Dr. Ron Goldfarb at the National Institute of Standards and Technology in Colorado in February of 1994. Under extreme pressure its Tc can be coaxed up even higher - approximately 25 to 30 degrees more at 300,000 atmospheres.
This vast array of features makes superconductors engineering marvels. Superconductors are being used in many different fields these days, from medicine to space programs. Magnetic levitation is an application where superconductors perform extremely well.

Transport vehicles such as trains can be made to "float" on strong superconducting magnets, virtually eliminating friction between the train and its tracks. The first of its kind, the Yamanashi MLX01 maglev train, opened on April 3, 1997. In December 2003, the MLX01 test vehicle attained an incredible speed of 361 mph (581 kph).

Who knew that the superconductors could also be life savers?

The field of biomagnetism gives these babies a chance to show their magnetic life-saving skills. The field generated by a superconductor is used in MRI (Magnetic Resonance Imaging). The superconducting field excites hydrogen atoms present in the water and fat molecules of the body; they then release this energy at a frequency that is detected by a computer, and thus an image is created. A Korean research group has taken biomagnetism a step further with a device based on the SQUID (Superconducting Quantum Interference Device). The SQUID makes use of the Josephson Effect and can detect changes in magnetic field over a billion times weaker than the force that moves the needle of a compass.

The uses of superconductors don’t just stop there; superconductors could be big money savers in the field of electricity too. Electric generators made with superconducting wire are far more efficient than conventional generators wound with copper wire. In fact, their efficiency is above 99% and their size about half that of conventional generators.

So how about it: isn't superconductivity one of the greatest gifts of all from God to us humans? The best part is that, right now as you read this, some scientist somewhere might be finding out something absolutely new about the magnificent superconductors.
----
By Tahsin Uddin Mullick
North South University, Dhaka, Bangladesh
----
The Aftermath Publications, Issue 1
----

The Charge-Coupled Device

It was just a few years back that the average Joe used to buy tonnes of rolls of film from the nearest store to replenish the stomach of his ever-hungry camera so that he could capture his most prized moments. The next step consisted of dissecting the camera to retrieve the film, and even after such a hassle our average Joe often ended up with two to five frames of the roll badly developed due to poor synchronization of camera functions.

But everything has changed now. Recent developments in the imaging and photo-sensing industry have made the camera more efficient and functional than ever before. State-of-the-art technology and relentless research have led to the evolution of amazing products, from the tiny cameras in spy-bots and mobile phones to the gigantic cameras incorporated into modern telescopes. With the digitization of cameras and electromagnetic radiation sensors, optical gadgets have become fast, sensitive and pinpoint accurate. Now our average Joe is happier than ever: he knows that whatever he snaps is stored in his camera with superb clarity. Almost all the overwhelming features of a modern camera are the result of a minuscule yet stupendously important invention called the Charge-Coupled Device (CCD).

The first CCD was designed by Willard Boyle and George E. Smith in 1969 at the famous AT&T Bell Laboratories. The essence of their design was the ability to transfer charge along the surface of a semiconductor. Although it was initially conceived as a memory device, it later turned out to be an excellent imaging device owing to its sensitivity to light. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize for their work on the CCD.



A CCD for UV Detection

Before understanding the structure of a CCD it is important to know how semiconductors work at the microscopic level. A semiconductor is a material made up of atoms whose electrons are more tightly held than those in metals, but less tightly held than those in non-metals. Raising the temperature of a semiconductor causes electrons to shake loose due to the increased internal energy, so the conductivity of a semiconductor increases as the temperature rises.

Another way of increasing the conductivity of a semiconductor (like silicon) is to add minute, controlled amounts of impurities. If, for example, elements which have three valence electrons, like aluminum or indium, are added to silicon, the resulting gap or 'hole' is filled by an electron (making the outer shell of the aluminum stable) which happens to be moving through the silicon lattice. However, as electrons move from atom to atom, an electron which moves into the 'hole' must leave a 'hole' in the atom it came from. Therefore the positive 'hole' appears to move through the material in the direction opposite to the electron movement. The addition of the impurity increases the mobility of charge carriers in the semiconductor, thereby increasing conductivity. This is what is known as a "P-TYPE" semiconductor.


“N-TYPE” semiconductors contain impurities like Phosphorus and Arsenic which have five valence electrons. These atoms form covalent bonds with four surrounding atoms of silicon. One of the valence electrons of the impurity atom is held very loosely by the protons and detaches very easily by thermal agitation. These electrons from several atoms of the impurity contribute to increased conductivity of the semiconductor as a whole.


When an "N-TYPE" and a "P-TYPE" semiconductor are joined, a barrier potential develops at the junction, preventing the majority of the electrons in the "N-TYPE" semiconductor from diffusing into the "P-TYPE" semiconductor. (Further explanation of the mechanisms behind the barrier potential in joined semiconductors is beyond the scope of this writing.)

In contemporary cameras (such as modern camcorders or digital still cameras), the film is replaced by the CCD. The CCD is essentially an array of optical detectors (a type of microchip) that forms a photographic image using digital processing. It consists of a silicon wafer (the major portion of which is composed of N-TYPE silicon mounted on P-TYPE silicon) divided into an array of small regions called picture elements, more commonly known as pixels. Each pixel consists of three small electrodes separated from the N-TYPE semiconductor by a very thin layer of silicon dioxide insulator. Initially the potential of the center electrode of each pixel is held at +10V and the other two are held at +2V. Incident photons of light from the object being photographed create electron-hole pairs in the N-TYPE semiconductor by the photoelectric effect. So the image focused on the chip becomes an identical pattern of electrons in the semiconductor. These electrons are attracted by the +10V electrode and are held against the layer of silicon dioxide under that electrode.

Each electrode is connected to one of three voltage terminals which provide repeated +10V pulses in three phases. The effect is to shift the charge gathered under one electrode to the next electrode, and then to the next, until it arrives at the output electrode. Thus the output electrode produces a stream of pulses.
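A minimal sketch of this readout scheme may help. The short C program below is my own illustration, not part of the original article; the pixel values and array length are made up. It models one row of pixels and shows how each complete three-phase clock cycle moves every charge packet one electrode closer to the output:

#include <stdio.h>

#define PIXELS 8   /* number of electrodes in one CCD row (illustrative) */

/* Shift every charge packet one electrode toward the output, the way one
   complete three-phase clock cycle moves charge along the row. */
static void clock_cycle(int charge[PIXELS])
{
    /* The packet under the last electrode reaches the output amplifier. */
    int output = charge[PIXELS - 1];
    printf("output pulse: %d electrons\n", output);

    /* Each remaining packet follows the travelling +10V potential well. */
    for (int i = PIXELS - 1; i > 0; i--)
        charge[i] = charge[i - 1];
    charge[0] = 0;   /* the first well is now empty */
}

int main(void)
{
    /* Electrons collected under each electrode during exposure
       (made-up numbers standing in for a one-line image). */
    int charge[PIXELS] = {12, 40, 75, 75, 40, 12, 5, 0};

    /* Read the whole row out, one clock cycle per pixel. */
    for (int cycle = 0; cycle < PIXELS; cycle++)
        clock_cycle(charge);

    return 0;
}

Running it prints the stored charges one per clock cycle, which is exactly the "stream of pulses" the output electrode delivers to the rest of the camera.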


A Detailed Structure of a CCD

The main advantage of CCDs is that they are very efficient. Photographic film uses less than 4% of the photons reaching it; a CCD can make use of about 70% of the incident photons, because its pixels are far more sensitive than photographic chemicals and grains (the photoelectric effect is instantaneous, whereas a chemical reaction is relatively time-consuming). This explains the extensive use of CCDs in astronomical telescopes, where sensitivity is a prime issue.


An Array of CCDs In a Digital Sky Survey Telescope


Being more sensitive means that CCDs require much less exposure time than photographic films. But photographic films do have the advantage that light sensitive grains are smaller than CCD pixels and so they can produce images with better resolution.
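A rough way to see what that sensitivity buys (my own arithmetic, using the efficiency figures quoted above): to collect the same number of photons, film needs something like

t_film / t_CCD ≈ 0.70 / 0.04 ≈ 17.5

times the exposure of a CCD, which is why CCD-equipped telescopes can record faint objects in a fraction of the time.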

The CCD is a relatively new invention, with developments on it still in their infancy. The potential of the device is far-reaching, and I believe we are yet to unleash the real power the CCD has to offer. Ongoing research on CCD technology is likely to make it more useful, efficient and widely available in the near future.
----
By Mahmud Hasan
----
The Aftermath Publications, Issue 1
----

Why Learn C? Read Me!

There are a lot of programming languages out there now, from high-level languages such as Visual Basic down to low-level assembly languages. Nevertheless, it's best to learn C as a first programming language. Firstly, C has been around for 30+ years, and there are so many sources to learn from and fool around with (Sourceforge, for example, is a bank of code waiting to be looted). Moreover, thousands of sites and tutorials are available so that you can teach yourself and practice on your own.

The advantage of C is that it's almost English; it's like the English you'd use to IM! C expresses the ideas of programming in terms you can understand (with a bit of effort). Furthermore, the principles of C are used in almost all other languages, so you will be laying the foundation for learning a lot of other languages. Essentially, by learning C, you are learning programming itself. Even though C is like English, it's actually French! You'd think it was made for you, but it was actually made for the machine. You are able to work with pointers, bytes and even individual bits of data, so that you can optimize the memory usage and performance of your program. And when you involve yourself with advanced topics such as operating systems, C will come in handy for understanding the intricacies of things like networking and data management. And if you like the science of it all, learning and using C will be a hell of a lot of fun.
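As a small taste of that low-level control, here is a minimal sketch (mine, not from the article; the values are arbitrary) that uses a pointer to look at the individual bytes of an integer and then flips a single bit:

#include <stdio.h>

int main(void)
{
    unsigned int value = 0xC0FFEE;                   /* any number will do */
    unsigned char *byte = (unsigned char *)&value;   /* pointer to its raw bytes */

    /* Walk over the integer one byte at a time and print each one.
       The order the bytes appear in reveals the machine's endianness. */
    for (size_t i = 0; i < sizeof value; i++)
        printf("byte %zu = 0x%02X\n", i, (unsigned)byte[i]);

    /* Bit-level work: set, clear and test a single flag bit. */
    unsigned int flags = 0;
    flags |= 1u << 3;                                /* set bit 3   */
    flags &= ~(1u << 3);                             /* clear bit 3 */
    printf("bit 3 is %s\n", (flags & (1u << 3)) ? "set" : "clear");

    return 0;
}

Nothing here is exotic: the same pointer arithmetic and bit masking turn up everywhere from device drivers to network protocols, which is exactly the kind of territory mentioned above.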

Fun programming could be a nickname for C: using it you can write everything from web applications to pretty games. And if you need further persuasion, here is something that will surely do it: C was used to write Unix, the mother of all (good) operating systems. And it was quite possibly used to write much of Microsoft Windows as well.

By now I hope I have got you all fired up to dive into C. Well, get your suit on and type in the following addresses:
The GNU C Programming Tutorial: the “open source” book.

Sourceforge: the “open source” bank, rob it!

Code::Blocks: one of the best open-source IDEs ever.

That's all you need, along with a good amount of motivation and jokes:
Once a programmer drowned in the sea. Many marines were on the beach at the time, but the programmer kept shouting "F1! F1!" and nobody understood it.

Why do all Pascal programmers ask to live in Atlantis?
Because it is below C level.

Programming is like sex: one mistake and you have to support it for the rest of your life.

A project manager, a computer programmer and a computer operator are driving down the road when the car they are in gets a flat tire. The three men try to solve the problem.
The project manager said: "Let's catch a cab and in ten minutes we'll reach our destination."
The computer programmer said: "We have here the driver's guide. I can easily replace the flat tire and continue our drive."
The computer operator said: "First of all, let's turn off the engine and turn it on again. Maybe it will fix the problem."
Suddenly a Microsoft software engineer passed by and said: "Try to close all windows, get off the car, and then get in and try again."
----
By Kowsheek Mahmood
Ryerson University, Toronto, Canada
----
The Aftermath Publications, Issue 1
----