
Bits and Bytes 101



lostsoul62
13-03-2011, 05:02 PM
Everyone knows that there are 8 bits to a Byte. It's been 30 years since I went to school for this, so here goes. With a dial-up modem there are 10 bits to every Byte sent, and I guess it's that way sending data over the Internet nowadays. On a hard drive you have to have a bit between every Byte, so I'm thinking that's a total of 10 bits for every Byte. Or did they change something in the last 30 years?

nerd
13-03-2011, 05:22 PM
No, it's still 8 bits to a bite I believe, but the last I read up on it was years ago.

Snorkbox
13-03-2011, 05:26 PM
Definitely 8 bits to a byte. A bite, however, is somewhat different.

Erayd
13-03-2011, 05:35 PM
With a dial-up modem there are 10 bits to every Byte sent... or did they change something in the last 30 years?

Still 8 bits to a byte - that's never changed, and there's no such thing as a 10-bit byte - but different word lengths have been used for various things over the years (e.g. 7-bit ASCII).

I'm not quite sure what you're on about with the modems - are you talking about the error-correction overhead? If so, this has nothing to do with the size of a byte.

If error-correction wasn't what you were talking about, can you please clarify what it is that you're actually asking? Your original post was somewhat vague.

pcuser42
13-03-2011, 05:58 PM
There is a difference between a decimal kilobyte (1000 bytes to the kilobyte) and a binary kilobyte (1024 bytes, properly called a kibibyte), if that's what you're after...
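For illustration, here's the arithmetic in Python (the "500 GB" drive is just a made-up example):

marketed = 500 * 10**9   # a drive sold as "500 GB": decimal gigabytes
print(marketed / 2**30)  # what the OS reports in binary units: ~465.66

That's why a "500 GB" drive shows up as roughly 465 GB once you plug it in.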

jwil1
13-03-2011, 06:43 PM
4 bits are called a nibble :D

feersumendjinn
13-03-2011, 07:00 PM
And 2 bits is a quarter (http://en.wikipedia.org/wiki/Bit_(money)), eh SJ.;)

Paul.Cov
13-03-2011, 09:40 PM
Yeah, I believe he's talking about transmission and storage overheads / formatting data, etc. - cluster / sector and addressing info on HDDs, and checksums and addressing / routing info with transmitted data.

I don't know that there's a fixed ratio of these bits per byte - there will be reasonably typical estimates of this overhead, but I'd be totally clueless as to the proportions.
In the bad old days of earlier storage media I believe the proportion of stored bits to total bits used was about 3:5, e.g. a 1.44MB floppy had about 2.5MB actually written to its surface. The drive firmware or the OS took care of the addressing stuff, leaving us to see the remaining 1.44MB of storage space.

MushHead
13-03-2011, 09:49 PM
Although you do come across common transmission schemes where the overhead adds up to 10 bits (raw serial is often 1 start bit, 8 data bits [or 7 plus parity] & 1 stop bit), a byte is always 8 bits. With the modulation & compression you find even on quite old modem standards, 10 bits per byte wouldn't accurately describe what went over the wire by any means, but might describe what happens prior to modulation & after demodulation.

The French got this one right, as they use octet, which makes it clearer.
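A quick Python sketch of that framing (assuming plain 8N1, LSB first, as on a classic UART):

def frame_8n1(byte):
    # Frame one data byte as 10 line bits: start + 8 data + stop.
    bits = [0]                                   # start bit (line pulled low)
    bits += [(byte >> i) & 1 for i in range(8)]  # 8 data bits, LSB first
    bits.append(1)                               # stop bit (line idles high)
    return bits

print(frame_8n1(ord("A")))  # 10 bits on the wire for 8 bits of data

So 9600 baud with 8N1 framing carries 960 bytes per second, not 1200.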

lostsoul62
14-03-2011, 02:36 AM
I know this is a techie question and only about one in ten might know it. As I tried to say, 8 bits = 1 Byte and everyone knows it. Everyone knows that 1 kilobit = 1024 bits. When you send data anywhere you will be sending 9+ bits for every Byte, and I guarantee that. Are we on the same page? Re-read my original post.

pctek
14-03-2011, 07:46 AM
On a hard drive you have to have a bit between every Byte

Really? Hard disk drives are sealed systems containing one or more magnetic discs. Each side is simply called a side or a head, because for each side there is a magnetic head available for reading and writing data. Each side of a magnetic disc is divided into several concentric tracks (a cylinder is the set of tracks at the same position across all sides). Each track is then divided into sectors. Each sector holds 512 bytes of information. The minimum unit the hard disk drive controller can access is the sector, meaning that if it has to read just one byte from a given sector, it must read the entire sector.

The number of bytes inside a sector is fixed: it is always 512 bytes. But the number of tracks, sectors per track and sides (i.e. heads) a hard drive has will depend on the model. The number of heads, tracks and sectors per track a hard disk drive has is called its geometry.

If you multiply the number of heads by the number of tracks and then by the number of sectors per track you will find how many sectors a given hard disk drive has (for newer hard disk drives the manufacturer announces the number of sectors the drive has, instead of its geometry). Multiplying this number by 512 will give you the total capacity of a hard disk drive in bytes.
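As a Python sketch (the example geometry here is the old ATA CHS maximum, just for illustration):

heads = 16
cylinders = 16383        # tracks per side
sectors_per_track = 63
bytes_per_sector = 512

total_sectors = heads * cylinders * sectors_per_track
print(total_sectors * bytes_per_sector)  # 8,455,200,768 bytes - the old ~8.4GB CHS barrier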

MushHead
14-03-2011, 08:25 AM
I think what lostsoul was referring to was the physical bit layout on the disc surface. On the ground, so to speak, there are sector markers, ECC bits, etc. Plus the whole disc will be laid out with some sort of RLL encoding, as there has to be a polarity change in the signal read by the heads every so often. The end result would probably be more than even 9 or 10 bits per byte.

So basically whenever you have to encode a sequence of (always 8-bit) bytes onto some sort of physical medium, you're going to have encoding overhead - often mitigated by some compression scheme running in parallel.
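To make the overhead concrete, here's a rough Python sketch of MFM (one of the simpler codes in the RLL family - not necessarily what any given drive uses), where every data bit costs two bits on the platter:

def mfm_encode(data_bits):
    # MFM inserts a clock bit before each data bit; the clock bit is 1
    # only between two zero data bits, so the head always sees flux
    # changes often enough to stay in sync.
    out, prev = [], 0
    for b in data_bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock bit
        out.append(b)                                   # data bit
        prev = b
    return out

print(len(mfm_encode([1, 0, 0, 1, 0, 1, 1, 0])))  # 16 platter bits for 8 data bits

And that's before the sector markers and ECC bits are added on top.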

Erayd
14-03-2011, 09:27 AM
The number of bytes inside a sector is fixed: it is always 512 bytes.

...unless you're talking about one of the newer drives which use a 4KB sector size. This also doesn't take into account operational overhead (error-correction etc.), as MushHead has mentioned.

dugimodo
14-03-2011, 02:31 PM
Really not sure what you are asking - perhaps you could make it clearer.

Yes, 8 bits = 1 byte, and if you send any extra data such as a CRC etc. it would either be contained within that 8 bits or, more likely, sent as part of the overhead specified by whatever standard is used. There would not be any random bits floating around - they would be formatted into bytes themselves.

The kilobyte thing is much less clear. It has part of its roots in the fact that kilo, mega, tera etc. are decimal terms that were "borrowed" for use in the computer world. As such a kilobyte CAN be defined as either 1024 bytes (the proper binary number) or 1000 bytes (the literal meaning of the word). Marketing people obviously choose the one that sounds bigger.

The move to use such terms as KiB etc to differentiate is a relatively new thing that hasn't been universally adopted.

The old floppy is the worst of it, as it uses a combination of the two. I forget the exact details but it's something like 1440 binary KB sold as "1.44 MB" - i.e. a decimal count of binary KB.
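The arithmetic, in Python:

capacity = 1440 * 1024   # "1.44 MB" floppy = 1440 binary KB
print(capacity)          # 1,474,560 bytes
print(capacity / 10**6)  # 1.47456 decimal MB
print(capacity / 2**20)  # ~1.40625 binary MB - neither one is actually 1.44!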

ManUFan
14-03-2011, 03:51 PM
Way back when I was a technician for Telecom, data was sent using the ISO 7-layer model... (probably still is). The model is (from layer 1 to 7):

Physical
Link
Network
Transport
Session
Presentation
Application

Depending on which type of encoding was used there were varying 'packet' lengths. Packets included such information as addressing, error correction (CRC), and of course the data. Then there was the interleaving of packets, rerouting to avoid losses, redundant paths, etc. - quite complicated (and this was 20-odd years ago!). I used to be able to decipher packet streams on the fly when monitoring a connection!!! - no way now (LOL).

So I agree - usually 9+ bits per byte (minimum)... Unless we are talking telex (what's that, you ask? - Google it), where they used 5 data bits + 1 start & 1.5 stop bits... and +/- 80 volt signals! They would give you a decent thump if you weren't careful....
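The telex arithmetic, for the curious (assuming the standard 50 baud line rate):

frame_bits = 1 + 5 + 1.5  # start + 5 ITA2 data bits + 1.5 stop bits
print(50 / frame_bits)    # 50 baud / 7.5 bits = ~6.7 characters per second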

Memories...............