What is ASCII?
ASCII stands for the American Standard Code for Information Interchange, and is pronounced with a hard "C" sound, as "ask-ee." First adopted as a standard in 1963, it quickly became widely used throughout the computing world. It defines a set of characters that a computer can display on a screen, along with some control characters that have special functions.
Basic ASCII uses seven bits to encode each character, allowing for up to 128 distinct codes (2⁷ = 128). This size was chosen to fit within the common basic block of computing, the eight-bit byte. The eighth bit was often set aside for error checking, such as a parity bit, leaving seven bits for the character set.
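As a minimal sketch of that scheme in Python (the helper name with_even_parity is illustrative, not part of any standard), the eighth bit can carry an even-parity check over the seven data bits:

def with_even_parity(ch):
    """Pack a 7-bit ASCII code plus an even-parity bit into one byte."""
    code = ord(ch)
    assert code < 128, "not a 7-bit ASCII character"
    parity = bin(code).count("1") % 2  # 1 if the seven data bits contain an odd number of 1s
    return (parity << 7) | code        # parity goes in the eighth (top) bit

print(2 ** 7)                                # 128 possible codes
print(format(with_even_parity("A"), "08b"))  # 'A' is 65 = 1000001; two 1-bits, so the parity bit is 0 -> 01000001

With even parity, the receiver simply counts the 1s in each byte; an odd total means at least one bit was corrupted in transit.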

There are 33 codes in ASCII that are used to represent things other than printable characters. The first 32 (0-31) are control codes, covering everything from the bell, to the line feed command, to the start of a header; the final one, 127, is the delete character. The remaining codes are the printable characters. Code 32 is the space, codes 48-57 represent the numeric digits, codes 65-90 are the capital letters, and codes 97-122 are the lower-case letters. The rest are punctuation marks, mathematical symbols, and other symbols such as the pipe and tilde.
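These ranges are easy to verify in Python, whose built-in ord() function returns a character's code:

print(ord("0"), ord("9"))   # 48 57  -> the digits
print(ord("A"), ord("Z"))   # 65 90  -> capital letters
print(ord("a"), ord("z"))   # 97 122 -> lower-case letters
print(ord("\n"))            # 10     -> the line feed control code
print(ord("a") - ord("A"))  # 32     -> matching letters differ by exactly one bit

One handy consequence of this layout is that corresponding capital and lower-case letters differ by exactly 32, a single bit, so converting case is a one-bit operation.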

ASCII was originally conceived as a simpler character set, using six bits rather than seven. Ultimately, it was decided that the addition of lower-case letters, punctuation, and control characters would greatly enhance its usefulness. Not long after its adoption, there was much discussion about possible replacements and adaptations of the code to incorporate non-English and even non-Roman characters. As early as 1972, an ISO standard, ISO 646, was created in an attempt to allow a greater range of characters. A number of problems existed with ISO 646, however, and it fell by the wayside.

The current leading contender for replacing this standard is the Unicode character set. It allows for over a million characters to be mapped by using sequences of one or more bytes to represent a character, rather than a single byte. The first 128 code points of Unicode are identical to the ASCII character set, however, preserving backward compatibility.
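That compatibility is easy to observe in Python. Assuming the common UTF-8 encoding (the article does not name one, but UTF-8 is the Unicode encoding with this byte-level property), ASCII characters still occupy a single byte with their original values:

print("A".encode("utf-8"))  # b'A'             -> one byte, the same value as ASCII code 65
print("€".encode("utf-8"))  # b'\xe2\x82\xac'  -> three bytes for the euro sign
print(ord("A"))             # 65               -> identical code point in ASCII and Unicode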
The standard is also sometimes discussed in reference to ASCII art. This describes the use of the basic character set to create visual approximations of images.
Discussion Comments
@pleonasm - People still do that, you know. Chat rooms don't want to be bogged down by a lot of image files that take a long time to load. ASCII art takes almost no time to load because it's so simple.
And there are some amazing creations out there, from people who have managed to make photo-realistic art using only ASCII characters.
Of course, there are also programs now which will do that for you. Once you break down each character so that it's just a way to make the screen lighter or darker, it's probably not that hard to make a program that would translate photos into ASCII art fairly quickly.
When you think about it, smilies and things like that could also be called ASCII art, so we still use them all the time.
I didn't realize that's how you say ASCII. I've always thought it was said "ass-see" which probably shows how often I've ever actually talked to someone about it. Because, yeah, I did realize it was something to do with programming and the way computers see letters, but I didn't really think about it much.
I have seen a lot of ASCII art though, as it still seems to be fairly popular. I can remember when it was the main way that people traded images on the computer. People would give each other ASCII roses in chat rooms, for example. Ah, the good old days.