Win98banter — A Windows 98 & ME forum



"16 bit" and "32 bit" code?



 
 
  #1  
Old October 21st 05, 12:01 AM
Bill in Co.
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

Can someone enlighten me here. Specifically WHAT are they referring to?

It seems that Win3.x is supposedly associated with the term "16 bit code"
and the Win 9x and above is associated with the term "32 bit code". Now
WHAT exactly are they referring to? The address bus? The data bus?
The machine code instruction length (seems doubtful, unless they just mean
the operand)? What exactly? (sorry, I must be having a blanked out mind
here!)

At any rate, it seems the "16 bit code" term is associated with some Win 3.x
apps, and can't handle long filenames (amongst other things), of course.
This is all probably tied in with the microprocessor and its instruction
sets. They didn't have 32 bit addressing until the 386 came along, as I
recall. Maybe that's it. (32 bits or 4 bytes).


  #2  
Old October 21st 05, 12:24 AM
Richard G. Harper
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

16-bit code is compiled to run on a 16-bit processor and operating system
(Windows 3.1, WFW, DOS; Intel 8086/80186 and 80286 processors). 32-bit code
is compiled to run on a 32-bit processor (anything 80386 and above, Windows
95 and above). The number of bits refers to the data bus width of the
processor. As you can run 32-bit Windows on a 64-bit processor, you can
also run 16-bit Windows on a 32-bit processor.
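[Editor's note: the practical difference shows up in how wide the fields are that compiled code works with. A rough illustration in Python, using the struct module's 16- and 32-bit field codes to stand in for the two word sizes — this is an illustrative sketch, not anything from the original thread:]

```python
import struct

# '<H' is an unsigned 16-bit field, '<I' an unsigned 32-bit field.
assert struct.calcsize('<H') == 2   # 16 bits = 2 bytes
assert struct.calcsize('<I') == 4   # 32 bits = 4 bytes

# A value like 70000 fits in a 32-bit field but overflows a 16-bit one:
struct.pack('<I', 70000)            # fine
try:
    struct.pack('<H', 70000)
    overflowed = False
except struct.error:
    overflowed = True
assert overflowed
```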

--
Richard G. Harper [MVP Shell/User]
* PLEASE post all messages and replies in the newsgroups
* for the benefit of all. Private mail is usually not replied to.
* My website, such as it is ...
http://rgharper.mvps.org/
* HELP us help YOU ... http://www.dts-l.org/goodpost.htm


"Bill in Co." wrote in message
...
Can someone enlighten me here. Specifically WHAT are they referring to?

It seems that Win3.x is supposedly associated with the term "16 bit code"
and the Win 9x and above is associated with the term "32 bit code". Now
WHAT exactly are they referring to? The address bus? The data bus?
The machine code instruction length (seems doubtful, unless they just mean
the operand)? What exactly? (sorry, I must be having a blanked out mind
here!)

At any rate, it seems the "16 bit code" term is associated with some Win 3.x
apps, and can't handle long filenames (amongst other things), of course.
This is all probably tied in with the microprocessor and its instruction
sets. They didn't have 32 bit addressing until the 386 came along, as I
recall. Maybe that's it. (32 bits or 4 bytes).




  #3  
Old October 21st 05, 02:46 AM
Bill in Co.
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

OK, so it's the data bus. Thanks for that info.

I wonder how (or if?) that is somehow related to the fact that only the 386
and above could *address* huge amounts of memory, without being so restricted
by that old segment register?

If I remember right, there were 64K segments (16-bit offsets) prior to the
386, which was pretty limiting for most code. Is there a tie-in here (between
the larger data bus of the 386 and its larger addressing capability, with its
new, larger segments), or is that just a coincidence?
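[Editor's note: the pre-386 scheme being described can be sketched numerically. In 8086 real mode a physical address is formed as segment * 16 + offset, giving a 20-bit (1 MB) space, while the 16-bit offset limits each segment to 64K. A hypothetical helper, for illustration only:]

```python
def real_mode_address(segment, offset):
    """8086 real mode: physical = segment * 16 + offset (20-bit result)."""
    return ((segment << 4) + offset) & 0xFFFFF

# The classic reset vector F000:FFF0 sits near the top of the 1 MB space:
assert real_mode_address(0xF000, 0xFFF0) == 0xFFFF0

# A single segment spans only 64 KB of offsets:
assert real_mode_address(0x1000, 0x0000) == 0x10000
assert real_mode_address(0x1000, 0xFFFF) == 0x1FFFF
```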

Richard G. Harper wrote:
16-bit code is compiled to run on a 16-bit processor and operating system
(Windows 3.1, WFW, DOS; Intel 8086/80186 and 80286 processors). 32-bit code
is compiled to run on a 32-bit processor (anything 80386 and above, Windows
95 and above). The number of bits refers to the data bus width of the
processor. As you can run 32-bit Windows on a 64-bit processor, you can
also run 16-bit Windows on a 32-bit processor.


  #4  
Old October 21st 05, 09:04 AM
Franc Zabkar
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

On Thu, 20 Oct 2005 19:24:14 -0400, "Richard G. Harper"
put finger to keyboard and composed:

16-bit code is compiled to run on a 16-bit processor and operating system
(Windows 3.1, WFW, DOS; Intel 8086/80186 and 80286 processors). 32-bit code
is compiled to run on a 32-bit processor (anything 80386 and above, Windows
95 and above). The number of bits refers to the data bus width of the
processor.


Just a nit ...

That would be the internal data bus width. Pentium-class processors
have a 64-bit external data bus.

As you can run 32-bit Windows on a 64-bit processor, you can
also run 16-bit Windows on a 32-bit processor.


-- Franc Zabkar

Please remove one 'i' from my address when replying by email.
  #5  
Old October 21st 05, 09:04 AM
Franc Zabkar
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

On Thu, 20 Oct 2005 17:01:36 -0600, "Bill in Co."
put finger to keyboard and composed:

Can someone enlighten me here. Specifically WHAT are they referring to?

It seems that Win3.x is supposedly associated with the term "16 bit code"
and the Win 9x and above is associated with the term "32 bit code". Now
WHAT exactly are they referring to? The address bus? The data bus?
The machine code instruction length (seems doubtful, unless they just mean
the operand)? What exactly? (sorry, I must be having a blanked out mind
here!)

At any rate, it seems the "16 bit code" term is associated with some Win 3.x
apps, and can't handle long filenames (amongst other things), of course.
This is all probably tied in with the microprocessor and its instruction
sets. They didn't have 32 bit addressing until the 386 came along, as I
recall. Maybe that's it. (32 bits or 4 bytes).


Hint: If "16 bit" referred to the width of the address bus, then the
processor's external address space would be restricted to 64K.

-- Franc Zabkar

Please remove one 'i' from my address when replying by email.
  #6  
Old October 21st 05, 11:30 AM
Richard G. Harper
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

Nope, no coincidence at all. The 80386 processor unlocked the ability to
use large amounts of memory and larger hard drives, and introduced the
virtual 8086 processor mode that made multitasking of DOS sessions
practical in the operating system.

--
Richard G. Harper [MVP Shell/User]
* PLEASE post all messages and replies in the newsgroups
* for the benefit of all. Private mail is usually not replied to.
* My website, such as it is ...
http://rgharper.mvps.org/
* HELP us help YOU ... http://www.dts-l.org/goodpost.htm


"Bill in Co." wrote in message
...
OK, so it's the data bus. Thanks for that info.

I wonder how (or if?) that is somehow related to the fact that only the 386
and above could *address* huge amounts of memory, w/o being so restricted by
that old address segment register?

If I remember right, there were 64K segments (two bytes), prior to the 386,
which was pretty limiting for most code. Is there a tie in here (with the
larger data bus of the 386, and the large addressing capability of it too,
using its new, larger segment), or is that just a coincidence?



  #7  
Old October 21st 05, 11:58 AM
Jeff Richards
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

Your description is quite correct - the operating systems are associated with
those terms. But it is very much terminology that was used by Microsoft
to distinguish between the two generations of OS, and it doesn't necessarily
have a solid basis in the technology.

The 16- or 32-bit reference is to the size of the data element used to
reference handles, or object identifiers, in the Windows API. In 16-bit
Windows most parameters (including the critical handle parameter) were
passed as 16-bit numbers, packed (where appropriate) into the two halves of
a 32-bit long integer. With Win32, object handles became 32-bit values and
this packing was (mostly) abandoned, so that each parameter got the full
32 bits of a long integer to itself. This meant that programs had to be
rewritten if they wanted to use the 32-bit API instead of the original
16-bit API. However, many of the 32-bit APIs were reintroduced as an
"Ex" version within the 16-bit API set, so it was actually possible to write
to the 16-bit API using 32-bit function calls, as long as you used the Ex
version of the function.

The 16- or 32-bit reference also applies to the size of the data element
used to carry the message identifier in a message. Windows lives and dies by
its messages, so this change was at least as significant as the change to
the size of a handle. Packing two 16-bit values into a single long integer
was pretty much standard for message parameters, because the number of
parameters was fixed, and the programmers needed to get as much information
into one message as possible. When handles grew to 32 bits, and needed the
long integer message parameter all to themselves, the messages had to change
to accommodate, and the programs had to change to match.
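[Editor's note: the packing described here is the classic MAKELONG/LOWORD/HIWORD idiom from the Windows headers. A minimal re-creation in Python — the names mirror the Win32 macros, but this sketch is illustrative only:]

```python
def makelong(lo, hi):
    """Pack two 16-bit values into one 32-bit long (Win16 MAKELONG)."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

def loword(value):
    """Extract the low 16 bits (Win16 LOWORD)."""
    return value & 0xFFFF

def hiword(value):
    """Extract the high 16 bits (Win16 HIWORD)."""
    return (value >> 16) & 0xFFFF

# e.g. an (x, y) coordinate pair squeezed into a single message parameter:
packed = makelong(200, 300)
assert loword(packed) == 200
assert hiword(packed) == 300
```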

The link to the technology is tenuous. How the function call or message
parameters (32-bit long integers, mostly) are assigned to registers during
execution of the call depends on the CPU that the software is compiled for
and is pretty much irrelevant to the programmer. What is interesting is that
the designers of Windows 3.x chose to use 32-bit values for the function
call parameters (albeit, mostly using them as two 16-bit values packed
together) in the days when the hardware was exclusively 16-bit.

The 32-bit operating systems were larger and had more features (such as long
filenames) than the earlier versions, but there is nothing to prevent a 16-bit
operating system from having these features.
--
Jeff Richards
MS MVP (Windows - Shell/User)
"Bill in Co." wrote in message
...
Can someone enlighten me here. Specifically WHAT are they referring to?

It seems that Win3.x is supposedly associated with the term "16 bit code"
and the Win 9x and above is associated with the term "32 bit code". Now
WHAT exactly are they referring to? The address bus? The data bus?
The machine code instruction length (seems doubtful, unless they just mean
the operand)? What exactly? (sorry, I must be having a blanked out mind
here!)

At any rate, it seems the "16 bit code" term is associated with some Win 3.x
apps, and can't handle long filenames (amongst other things), of course.
This is all probably tied in with the microprocessor and its instruction
sets. They didn't have 32 bit addressing until the 386 came along, as I
recall. Maybe that's it. (32 bits or 4 bytes).




  #8  
Old October 21st 05, 01:51 PM
Tim Slattery
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

"Bill in Co." wrote:

Can someone enlighten me here. Specifically WHAT are they referring to?

It seems that Win3.x is supposedly associated with the term "16 bit code"
and the Win 9x and above is associated with the term "32 bit code". Now
WHAT exactly are they referring to? The address bus? The data bus?
The machine code instruction length (seems doubtful, unless they just mean
the operand)? What exactly? (sorry, I must be having a blanked out mind
here!)


Basically, it's the length of a word, the unit that the machine works
with at its lowest level. 16-bit machines, like the 8086 and 80286,
use 16-bit words. So if a word is used to store a signed integer, that
integer can range from -32,768 to +32,767. Both machines had
special instructions to handle larger numbers, but you had to remember
to use them when you needed them. Also, a 16-bit word would normally
limit you to a 16-bit address space, which would allow you to address
only 65,536 bytes of RAM. Both the 8086 and the 80286 had kludges to
allow you to address more than that (the 8086 could handle a 1MB
address space, the 80286 could go to 16MB). The fact that both of
these machines were 16-bitters meant that handling any data structure
larger than 65,536 bytes was tricky for the programmer.
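[Editor's note: the 16-bit signed range can be made concrete — interpreting the low 16 bits of a number as a signed value shows how arithmetic past 32,767 wraps around. An illustrative sketch:]

```python
def to_int16(n):
    """Interpret the low 16 bits of n as a signed 16-bit integer."""
    n &= 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

assert to_int16(32767) == 32767       # largest positive 16-bit value
assert to_int16(32767 + 1) == -32768  # overflow wraps to the smallest
assert to_int16(65535) == -1          # all bits set = -1 in two's complement
```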

The 80386 was a 32-bit machine and implemented a flat 32-bit address
space. No kludges, just give a number between 0 and 4,294,967,295 for
an address. No problem at all handling very large data structures.
Also, a normal signed integer now can range between about
-2,000,000,000 and +2,000,000,000. Clearly a program that runs in
32-bit mode and references (for example) memory location 3,245,098,345
is not going to work in 16-bit mode, where that is not a valid
address.

Intel (and MS) have so far maintained backwards compatibility, so your
32-bit P4 processor can be switched into a mode where it works like a
16-bit 8086 so that you can still run your DOS programs, or a
different mode where it works like an 80286 so you can run programs
from that era (Win3.x programs).

--
Tim Slattery
MS MVP(DTS)

  #9  
Old October 21st 05, 02:34 PM
dadiOH
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

Bill in Co. wrote:

If I remember right, there were 64K segments (two bytes), prior to
the 386, which was pretty limiting for most code.


It rather depends on what one was writing. If one needed to use values
greater than 2^16, one had to work around it. OTOH, you can do a whole
*lot* within 2^16 bytes...


--
dadiOH
____________________________

dadiOH's dandies v3.06...
...a help file of info about MP3s, recording from
LP/cassette and tips & tricks on this and that.
Get it at http://mysite.verizon.net/xico


  #10  
Old October 21st 05, 07:38 PM
Bill in Co.
external usenet poster
 
Posts: n/a
Default "16 bit" and "32 bit" code?

Well, I'm still getting a bit confused here. We have one guy saying the
term 16-bit or 32-bit indicates the size of the microprocessor's *internal
data bus*,

and another saying that term indicates the address bus capabilities (and
I do vividly recall that the old 8086 had CS:IP, and each of those registers
was 16 bits, so if you needed to access anything more than 64K, you HAD to
change the damn CS too),

and this one from you, Jeff, talking about the Windows API calls (which I
know the least about - I'm assuming it's basically a function call which has
some variable passed into it, or returned from it, to invoke a function in
Windows). Talking about this seems a bit different than talking just about
the microprocessor's address and data buses, but there must be some
relation.

And I understand what Tim said - to recap, with the 386 and above we got the
huge flat addressing space due to 32 bits being available for the *address*
bus, NOT the data bus.

I feel like I'm still missing something here.


Jeff Richards wrote:
You description is quite correct - the operating systems are associated with
those terms. But it is very much a terminology that was used by Microsoft
to distinguish between the two generations of OS, and doesn't necessarily
have a solid basis in the technology.

The 16- or 32-bit reference is to the size of the data element used to
reference handles, or object identifiers, in the Windows API. In 16-bit
Windows most parameters (including the critical handle parameter) were
passed as 16 bit numbers, packed (where appropriate) into the two halves of
a 32-bit long integer. With Win32, object handles became 32-bit values and
this packing was (mostly) abandoned so that each parameter got the full
32-bits of a long integer to itself. This meant that programs had to be
rewritten if they wanted to use the 32-bit API instead of the original
16-bit API. However, many of the 32-bit APIs were re-birthed as an
Ex-version within the 16-bit API set, so it was actually possible to write
to the 16-bit API using 32-bit function calls, as long as you used the Ex
version of the function.

The 16- or 32-bit reference also applies to the size of the data element
used to carry the message identifier in a message. Windows lives and dies by
its messages, so this change was at least as significant as the change to
the size of a handle. Packing two 16-bit values into a single long integer
was pretty much standard for message parameters, because the number of
parameters was fixed, and the programmers needed to get as much information
into one message as possible. When handles grew to 32 bits, and needed the
long integer message parameter all to themselves, the messages had to change
to accommodate, and the programs had to change to match.

The link to the technology is tenuous. How the function call or message
parameters (32-bit long integers, mostly) are assigned to registers during
execution of the call depends on the CPU that the software is compiled for
and is pretty much irrelevant to the programmer. What is interesting is that
the designers of Windows 3.x chose to use 32-bit values for the function
call parameters (albeit, mostly using them as two 16-bit values packed
together) in the days when the hardware was exclusively 16-bit.

The 32-bit operating systems were larger and had more features (such as long
filenames) than the earlier versions. There is nothing to prevent a 16-bit
operating system having these features.



 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 Win98banter.
The comments are property of their posters.