From: 98 Guy
Date: July 25th 07, 06:15 AM
Newsgroups: microsoft.public.win98.gen_discussion, microsoft.public.win98.disks.general, alt.windows98
Subject: Windows 98 large file-count tests on large volume (500 GB hard drive)

Star@*.* wrote:

> They will offer alternate methods but will never go to the
> lengths you have to prove or disprove a point.


It would be good to have someone else replicate what I've done. I
can't be the only one with a bunch of new motherboards, hard drives,
CPUs and memory sitting around...

> Just tell them to F**K off and do their own testing, or ignore
> everything/anything you have done.


Doing something along those lines has crossed my mind recently.

> PS: I have always been told that the problem with large numbers of
> clusters in 98 was due to the fact that on boot the FAT
> table was read into memory and would use up all available
> memory just to hold the FAT table.


That argument was offered back in February, when I first tried
running Win-98 on FAT-32 volumes with large cluster counts.

I countered by pointing out that, by Microsoft's own reasoning, a
volume was never allowed to have more than 4.177 million clusters
because that was the largest number of clusters that DOS scandisk
could process, given a supposed 16 MB array-size limitation. They
mentioned nothing about Windows needing to load the entire FAT table
during normal use. And besides, given Win98's specified minimum
requirements (16 MB of RAM), you'd have a situation where a good
chunk of that would be consumed by the FAT alone.
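
If memory serves, the documented limitation was actually 16 MB minus
64 KB for the FAT buffer, and the arithmetic behind the ceiling checks
out from that: (16,777,216 - 65,536) bytes / 4 bytes per FAT32 entry
= 4,177,920 entries, which is exactly where the 4.177 million cluster
figure comes from.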

I've since discovered that DOS scandisk has no such 16 MB memory
limitation. Or perhaps it does, but it doesn't affect its ability to
process a FAT with more than 4 million clusters. I think that the
only time the entire FAT table is read into system memory is during
disk maintenance, like Windows scandisk and defrag.

If it's true that you need 4 bytes per cluster to read in the FAT
table, then maybe if I put more memory into the system the Windows-ME
versions of scandisk and defrag would work. I think I'll try that.
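
To put numbers on that 4-bytes-per-cluster theory, here's a quick
back-of-the-envelope calculation in plain C (the 500 GB volume size,
4 KB cluster size and 4-byte FAT32 entry size are my assumptions, not
anything measured):

#include <stdio.h>

int main(void)
{
    unsigned long long volume_bytes  = 500000000000ULL; /* 500 GB, decimal  */
    unsigned long long cluster_bytes = 4096ULL;         /* 4 KB clusters    */
    unsigned long long entry_bytes   = 4ULL;            /* FAT32 entry size */

    unsigned long long clusters = volume_bytes / cluster_bytes;
    unsigned long long fat_ram  = clusters * entry_bytes;

    printf("clusters on volume : %llu\n", clusters);    /* ~122 million */
    printf("RAM to hold the FAT: %llu MB\n",
           fat_ram / (1024ULL * 1024ULL));              /* ~465 MB      */
    return 0;
}

That's in the same ballpark as the 121 million clusters on my 500 GB
test volume, and it says a machine would need on the order of half a
gigabyte of RAM just to buffer the whole FAT.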

> If this were true, it seems that with your 500 GB test all
> available memory would be used and there would be nothing
> left for programs.


Yes, that would have to be the case given the 121 million clusters in
my situation (with 512 MB of installed memory).
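
(To spell out the arithmetic: 121,000,000 clusters x 4 bytes per
FAT32 entry comes to roughly 460 MB, which would leave only about
50 MB of the 512 MB for Windows itself. And yet the system runs
normally.)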

> It also seems that your boot times would be in minutes, not
> seconds, just to read the FAT table.


The system boots fast, certainly within 1 minute. I haven't timed it
yet.
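
(Though to be fair, boot time alone doesn't prove much either way:
even if the entire ~460 MB FAT were read sequentially at boot, a
drive sustaining a conservative 50 MB/s would need only nine or ten
seconds to do it.)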