Open Means

Graphics card

From the movies we watch to medical imaging, defence applications and even manufacturing processes (computer-assisted manufacturing), the impact that 3D rendering has on the world is phenomenal. 3D accelerators (graphics cards) have acted as catalysts for the growth of 3D modelling, rendering, and animation.
A brief history of graphics cards
The year 1981, when IBM shipped the first PC video card, seems like ancient history. We have gone from single-colour video cards to cards that can work with millions of colours. The real graphics war for desktop PCs didn't start until 1997, when 3dfx delivered the Voodoo to the world with then-powerful features like mip-mapping, z-buffering and antialiasing. Rival NVIDIA followed the Voodoo 2 with its TNT and TNT2 cards, which were among the first AGP-based cards.
Let's move on to NVIDIA's GeForce 256 graphics card, launched on August 31, 1999. It essentially had 4 pixel pipelines, each of which could handle a single half-precision pixel operation per clock. Core and memory speeds were 120/166 MHz, and with a peak output of 480 million pixels per second and 15 million triangles per second, the GeForce 256 was hard to beat. On November 8, 2006, seven scant years later, the GeForce family's eighth generation of graphics cards was unveiled. The 8800 GT, a very value-oriented card, churns out 16.8 billion triangles per second: a night-and-day difference. Yet underneath, all graphics cards have much in common.
GPU Facts
The GPU (Graphics Processing Unit) is at the heart of a graphics card. Think of the GPU as a processor dedicated solely to graphics. Just as a CPU slots into a motherboard, a GPU is affixed to the PCB (printed circuit board) of the graphics card; unlike a CPU, however, a GPU can never be removed from its PCB. Today's GPUs can crunch enormous amounts of data and are, for this kind of work, more powerful than even the fastest quad-core CPU. NVIDIA's 8800 GTX GPU, for example, has 128 streaming processors built in, each capable of handling a thread of data. The 8800 GTX has 681 million transistors on its 90 nm die. Increases in performance have gone hand in hand with increases in programmability, with DX9.0c giving way to DX10 and finally DX11.
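As a loose analogy for this data-parallel design: a GPU runs one small program over many data elements at once. The sketch below applies the same toy operation to every element with map() (sequentially, since plain Python has no GPU; the brighten operation and the pixel values are invented for illustration):

```python
# Analogy only: a GPU applies one small program ("kernel") to many
# data elements; here we apply it to each pixel in turn with map().
def brighten(pixel):
    # Add a fixed amount to each colour channel, clamped at 255.
    return tuple(min(255, c + 40) for c in pixel)

pixels = [(10, 20, 30), (200, 220, 240), (255, 0, 128)]
result = list(map(brighten, pixels))
print(result)  # [(50, 60, 70), (240, 255, 255), (255, 40, 168)]
```

On a real GPU the 128 streaming processors would each run the kernel on a different element at the same time, rather than one after another.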
The characteristic properties of a GPU depend mainly on two variables:
1. The core speed:
This is the rated frequency (in MHz or GHz). A general rule of thumb: the faster the core, the faster the GPU.
2. Pixel shaders and vertex shaders:
A vertex shader takes a 3D model made up of vertices and lines and computes a 2D image from it, which is displayed as pixels on the screen. A pixel shader then refines this result, computing the final colour of each individual pixel. So there are two ways to increase the performance of a graphics core: increase the clock speed, or increase the number of shader units.
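As a toy illustration of the two shader stages (pure Python, not real shader code; the projection and colour gradient are invented for illustration):

```python
# A vertex shader runs once per vertex; a pixel shader once per pixel.

def vertex_shader(v):
    # Project a 3D point onto a 2D plane at z = 1 (simple perspective).
    x, y, z = v
    return (x / z, y / z)

def pixel_shader(u, v):
    # Compute a colour for one pixel; here just a simple gradient.
    return (int(255 * u), int(255 * v), 128)

print(vertex_shader((2.0, 4.0, 2.0)))  # (1.0, 2.0)
print(pixel_shader(0.5, 0.25))         # (127, 63, 128)
```

Adding shader units means more vertices or pixels can be processed at once; raising the clock means each one is processed faster.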
How does the core work?
Stage 1:Input
The GPU is designed to work on simple vertices, so every complex shape in a 3D scene is first broken down into triangles: the basic building blocks of any 3D model, however complex. All shapes, including rectangles, cubes and even curved surfaces, are broken down into triangles. Developers use a computer-graphics library (OpenGL or Direct3D) to feed each triangle into the pipeline, one vertex at a time.
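The decomposition into triangles can be sketched in a few lines (a toy "fan" triangulation of a convex polygon; real graphics libraries handle far more general shapes):

```python
def triangulate_fan(polygon):
    """Split a convex polygon (a list of vertices) into triangles
    by fanning out from the first vertex."""
    return [(polygon[0], polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

# A rectangle becomes two triangles:
quad = [(0, 0), (4, 0), (4, 2), (0, 2)]
print(triangulate_fan(quad))
# [((0, 0), (4, 0), (4, 2)), ((0, 0), (4, 2), (0, 2))]
```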
Stage 2 :Transformations
Each object in a 3D scene is modelled in its own localised coordinate system, which specifies the positions of its vertices relative to the object itself. Before the scene can be assembled, the GPU has to convert all the objects into a common world coordinate system. At this stage simple operations like scaling and rotation are applied. The output is a stream of transformed triangles.
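A single vertex's trip from model space to world space can be sketched like this (plain Python; the angle, scale and translation are arbitrary example values):

```python
import math

def model_to_world(vertex, angle_deg, scale, translation):
    """Rotate (about the z-axis), scale, then translate one vertex
    from its local (model) coordinates into world coordinates."""
    x, y, z = vertex
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    tx, ty, tz = translation
    return (rx * scale + tx, ry * scale + ty, z * scale + tz)

# Rotate a vertex 90 degrees, double its size, then move it along x:
v = model_to_world((1.0, 0.0, 0.0), 90, 2.0, (5.0, 0.0, 0.0))
print(tuple(round(c, 6) for c in v))  # (5.0, 2.0, 0.0)
```

Real GPUs do the same job with a single 4x4 matrix multiply per vertex, which lets them combine rotation, scaling and translation in one step.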
Stage 3 : Lighting
With each triangle placed in the global coordinate system, the GPU can calculate each triangle's colour on the basis of the lights in the scene. Advanced lighting techniques like specular highlights, diffuse shading and ambient occlusion are also possible.
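The diffuse part of such lighting is just a clamped dot product between the surface normal and the light direction (Lambert's law), sketched here in plain Python:

```python
import math

def diffuse_intensity(normal, light_dir):
    """Lambertian diffuse term: brightness is proportional to the
    cosine of the angle between the surface normal and the light."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    nlen = math.sqrt(sum(n * n for n in normal))
    llen = math.sqrt(sum(l * l for l in light_dir))
    return max(0.0, dot / (nlen * llen))  # clamp: no negative light

# A surface facing the light head-on is fully lit ...
print(diffuse_intensity((0, 0, 1), (0, 0, 1)))   # 1.0
# ... and one facing directly away receives nothing.
print(diffuse_intensity((0, 0, 1), (0, 0, -1)))  # 0.0
```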
Stage 4 : Getting perspective
The next stage of the pipeline projects the coloured triangles onto a virtual plane, as viewed from the user's perspective. These coloured triangles and their respective coordinates are finally ready to be turned into pixels.
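At its simplest, the projection reduces to dividing each vertex's x and y by its depth z (a minimal pinhole-camera sketch; the focal length is an arbitrary example parameter):

```python
def project(vertex, focal_length=1.0):
    """Project a 3D point onto a 2D image plane as seen from the
    origin: more distant points land closer to the image centre."""
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

# Two points with the same x offset; the farther one ends up
# nearer the centre of the image (perspective foreshortening):
print(project((2.0, 0.0, 2.0)))   # (1.0, 0.0)
print(project((2.0, 0.0, 10.0)))  # (0.2, 0.0)
```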
Stage 5 : Rasterization
Rasterization is the process of converting a vertex representation into a pixel representation: the image is converted from a vector format into a raster image made up of pixels. From this stage onward each pixel can be treated separately, and the GPU handles all the pixels in parallel.
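One common way to decide pixel coverage is with edge functions, sketched below as a toy brute-force rasterizer (real GPUs test many pixels in parallel and restrict the test to each triangle's bounding box):

```python
def edge(a, b, p):
    # Signed-area test: positive if p lies to the left of edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Return the pixel coordinates covered by a triangle given in
    counter-clockwise order, testing every pixel in the frame."""
    a, b, c = tri
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel centre
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                covered.append((x, y))
    return covered

# A right triangle covering the lower-left half of a 4x4 frame:
print(rasterize(((0, 0), (4, 0), (0, 4)), 4, 4))
```

Each pixel's test is independent of every other pixel's, which is exactly why this stage parallelises so well.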
Stage 6 : Texturing
Although the pixels are already coloured by this stage, additional textures are often applied for added realism. This is a cosmetic process, rather like the make-up worn by runway models: the image is draped in further textures that add an extra layer of detail and believability. These textures are stored in the GPU's memory.
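A texture lookup at its simplest maps a pixel's (u, v) coordinates to the nearest stored texel (a toy nearest-neighbour sampler over a made-up 2x2 checkerboard; real GPUs use filtered sampling and mipmaps):

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup: map (u, v) in [0, 1] to the
    closest texel in a 2D texture stored as a list of rows."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checkerboard texture: 0 = black, 255 = white.
checker = [[0, 255],
           [255, 0]]
print(sample_texture(checker, 0.1, 0.1))  # 0   (top-left texel)
print(sample_texture(checker, 0.9, 0.1))  # 255 (top-right texel)
```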
Stage 7 : Hidden Surfaces
In any 3D scene, some objects are fully visible while others are partly or wholly obscured by objects in front of them. So it is not as simple as writing each pixel to memory: the GPU checks whether the pixel already occupying that position is closer to the user, and only the closest pixel is sent to the monitor.
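This closest-pixel check is the classic z-buffer algorithm, sketched below (a toy Python version; the fragment coordinates, depths and colours are made up):

```python
def depth_test(fragments, width, height, far=float("inf")):
    """Classic z-buffer: keep, at each pixel, only the fragment
    closest to the viewer (the smallest depth value)."""
    zbuffer = [[far] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, depth, col in fragments:
        if depth < zbuffer[y][x]:   # nearer than what is stored?
            zbuffer[y][x] = depth   # ... then overwrite the depth
            colour[y][x] = col      # ... and the colour
    return colour

# Two fragments land on pixel (0, 0); the nearer red one wins.
frags = [(0, 0, 5.0, "blue"), (0, 0, 2.0, "red"), (1, 0, 3.0, "green")]
print(depth_test(frags, 2, 1))  # [['red', 'green']]
```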

ARTIFICIAL INTELLIGENCE

Computer vision

The world is composed of three-dimensional objects, but the inputs to the human eye and to computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information, not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

Expert systems

A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI; when this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practising doctors, provided its limitations were observed: its ontology included bacteria, symptoms and treatments, but not patients, doctors, hospitals, death, recovery, or events occurring in time, and its interactions assumed that a single patient was being considered.

Advantages Of Metro Ethernet over T1 or T3 (DS3) Lines



More and more modern business operations rely on Internet connectivity. The resource-hungry technologies that play an integral role in today's businesses have made it inevitable to embrace newer and more advanced methods of connectivity between operational sites. The speed of business and the reliability of data transfers have become the most important factors in business operation, requiring better connectivity modes over the WAN.

Though a large amount of carrier bandwidth is available at decent prices, metropolitan connectivity is affected by last-mile delay, which has a significant impact on the overall operation of the WAN. An easy and cost-effective transition to Metro Ethernet can address this issue. Metro Ethernet is a carrier Ethernet technology built in accordance with WAN (Wide Area Network) implementation requirements while remaining compatible with the Ethernet used by end systems. This carrier Ethernet technology is provisioned for MAN (Metropolitan Area Network) connectivity, hence the name Metro Ethernet.

The main advantage of Metro Ethernet over a T1 or T3 line is the carrier speed. T1 lines run at a fixed speed of about 1.5 Mbps, which can be increased by bonding several lines together, up to a maximum of around 12 Mbps. While better speeds can be achieved by bonding more T1 lines, it turns out to be very expensive by the time 10 Mbps is reached. A better and more cost-effective option is a T3 (DS3) line at about 45 Mbps over fiber-optic cable. Metro Ethernet, when provisioned over fiber-optic cable, can provide a flexible range of speeds: 10 Mbps (Ethernet), 100 Mbps (Fast Ethernet) and 1,000 Mbps (Gigabit Ethernet). With such flexibility, Metro Ethernet is made further desirable by lower costs in comparison to T1 or T3 leased lines, as Metro Ethernet providers tend to have nationwide backbones and local fiber dedicated to metropolitan areas.
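The bonding arithmetic above can be checked in a couple of lines (1.544 Mbps is the standard T1 line rate; the roughly 12 Mbps ceiling the text mentions corresponds to about eight bonded lines):

```python
T1_MBPS = 1.544  # standard rate of a single T1 line

def bonded_t1_speed(lines):
    """Aggregate bandwidth when several T1 lines are bonded."""
    return lines * T1_MBPS

# Eight bonded T1s give roughly the 12 Mbps ceiling:
print(round(bonded_t1_speed(8), 1))  # 12.4
```

Since each bonded line is billed separately, cost grows linearly with bandwidth, which is why bonded T1 becomes uneconomical well before T3 or Metro Ethernet speeds.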
Apart from its high-speed advantages, Metro Ethernet also provides cost-effective solutions for lower bandwidth requirements with Ethernet over Copper (EoC). As the name suggests, Ethernet over Copper is, like bonded T1 lines, provisioned over multiple copper pairs, but with a different modulation technology. The EoC modulation enables more efficient packet transmission over shorter distances. Ethernet over Copper lines can be obtained at transmission rates from 1 Mbps to 50 Mbps, generally charged per Mbps.

Though Metro Ethernet lines can be obtained and operated at much lower cost than T1 and T3 lines of equivalent speed, it can be more profitable to keep T1 lines in places Metro Ethernet has yet to reach, especially for lower speed requirements. This is due to the fact that the availability of Metro Ethernet is limited by the availability of fiber-optic cable, while T1 and T3 connectivity can be achieved anywhere with a valid phone line. However, if higher bandwidth is required, Metro Ethernet can be very effective in terms of both total cost and availability.

Another advantage of Metro Ethernet is availability: other leased-line connectivity modes like T1 and T3 depend on the phone lines provided by the telecom companies. Though downtime on a leased line is rare, even a short outage can be disastrous for a large business, or for a time-sensitive small business.

Hence a time-sensitive, bandwidth-hungry business would do well to choose Metro Ethernet connectivity, especially when the maintenance costs of existing T1 and T3 leased lines are creeping upwards due to the bundling of lines.




Mobiles or Cell phones


Mobiles:

Mobiles are one of the main media for communication. They are also called cell or cellular phones.

Why are mobiles called cell or cellular phones?

Most people know that mobiles are also called cell or cellular phones, but very few know why they are called so.

Mobile communication technology works on radio frequencies. You might have seen the mobile towers placed from place to place. These towers receive signals and send them on to the desired destination. Now the important part: these towers are placed in such a fashion that their coverage areas form hexagonal shapes, because a hexagonal layout helps distribute the signal properly, without gaps or distortion. Each such hexagonal area forms one cell, and because the mobile network works on these cells, the phone is called a cellular phone.

Nowadays there are huge cells, each covering a small city, and every cell is well linked with its neighbours, so we rarely face problems like call drops or loss of signal.

 

The Future Computer


The major factor determining the quality of a computer is the speed it can manage. Information is stored on the hard disk, from where it is transferred to RAM and then to the cache, which is then used by the processor. The hard disk is the slowest of all, and it is a major limiting factor for the computer.

Imagine a computer where the hard disk is as fast as RAM. Then there would be no need for separate RAM; instead, information would be transferred from the "hard disk cum RAM" to the cache directly. Do you think this is not possible, or that it would be too expensive? No. Thanks to IBM, a new type of storage device called "Millipede" may become our future. Not only is this storage medium fast, it is tiny too: within an area of one square inch it can store around a terabit of information. Can you imagine the genius of this creation? No more bulky one-kilogram hard disks providing at most 500 GB, and no more RAM modules maxing out at about 4 GB. Instead we would get a device that is both hard disk and RAM, a square inch in size, with capacity measured in terabits. This could make the size, power consumption, weight and speed of a computer at least ten times better than today's machines.

This technology will most probably come out within the next 5 years, and with it we will enter the next generation of computers.
