Old July 24th 07, 11:53 PM posted to microsoft.public.win98.gen_discussion,microsoft.public.win98.disks.general,alt.windows98
Stuart Miller
Default Windows 98 large file-count tests on large volume (500 gb hard drive)


"98 Guy" wrote in message ...
Stuart Miller wrote:

Something here does not make sense to me. Here is a clip
from your post in response to one of mine a few weeks ago.

..............................
For volumes larger than 64 gb, the cluster size remains at 32kb,
but the cluster count is allowed to exceed 2 million. (...)
..............................

Are you using some third party vfat driver?
Or some other formatting program?


The drive in question was formatted with Western Digital "Data
Lifeguard Tools" version 11.2 for DOS:

http://support.wdc.com/download/downloadxml.asp#53
http://websupport.wdc.com/rd.asp?p=s.../DLG_V11_2.zip

It creates a bootable floppy with drive-formatting software (I believe
it's some version of OnTrack's Disk Manager software). It allows for
the quick partitioning and formatting of WD drives. For FAT-32, it
allows the user to choose the cluster size, from 512 bytes up to 32
kb.

Thank you for that info. You are bypassing some of the Windows disk
management routines, so it would be natural to expect better results and
fewer limitations. (Even when MS-DOS first came out, there were file
systems in use which were far superior to FAT-16, but I'll skip the rant
about such things.)
We did this a number of times over the years, working around various
BIOS and MS-DOS limitations.
I don't remember the specific limits, but I recall 1 gig hard drives were
a problem in DOS.
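
The cluster arithmetic on a drive this size is easy enough to check,
though. A quick back-of-the-envelope sketch (Python; the 500 gb figure
is nominal, everything else is just division):

# Cluster counts for a nominal 500 GB FAT-32 volume at various
# cluster sizes. Pure arithmetic; no filesystem access involved.
VOLUME_BYTES = 500 * 10**9

for cluster_kb in (4, 8, 16, 32):
    clusters = VOLUME_BYTES // (cluster_kb * 1024)
    print(f"{cluster_kb:>2} KB clusters -> {clusters:,} clusters")

# 4 KB clusters work out to roughly 122 million clusters, which lines
# up with the ~121 million figure you report below.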

Question - what does this do with the 2 gig / 4 gig file size limit?
I use both numbers because FAT-32 cannot create a file bigger than 4
gigs, and many DOS/Windows-era tools cannot even copy files between 2
and 4 gigs, since they handle file sizes as signed 32-bit numbers.
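
For clarity, here is where those two numbers come from - a minimal
sketch (Python):

# FAT-32's on-disk file size field is an unsigned 32-bit value, so the
# filesystem itself caps a file at 4 GB minus one byte. Many DOS- and
# Win9x-era tools, however, track sizes and offsets as *signed* 32-bit
# numbers, which breaks at the 2 GB mark.
FAT32_MAX_FILE = 2**32 - 1   # 4,294,967,295 bytes (~4 GB)
SIGNED32_MAX   = 2**31 - 1   # 2,147,483,647 bytes (~2 GB)

print(f"FAT-32 max file size : {FAT32_MAX_FILE:,} bytes")
print(f"Signed 32-bit limit  : {SIGNED32_MAX:,} bytes")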


I don't think there is a specific directory size (number of
entries) limitation, except in the root directory.


Actually, I think that FAT and FAT-16 had a limit of something like
512 entries in the root directory (I remember some win-95 systems that
didn't work properly when the number of files in the root directory
reached 512).


This is an MS-DOS restriction, and applies to all FAT-12 and FAT-16
systems. MS-DOS (on which Win 95 & 98 are built) will not create any
more root directory entries once that fixed count is reached.

I believe I read somewhere that FAT-32 does not
allocate a fixed size for the directory hence there is no practical
limitation as to how many entries the directory (root or otherwise)
can contain.
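
That matches my understanding of the on-disk layout. A rough sketch of
where the difference lives (Python; the offsets are the standard BPB
fields from the FAT specification - treat this as an illustration, not
something tested against a real boot sector):

import struct

def root_dir_info(boot_sector: bytes) -> str:
    # BPB_RootEntCnt, 2 bytes at offset 17: the fixed root-directory
    # entry count. Nonzero on FAT-12/16, required to be 0 on FAT-32.
    root_ent_cnt = struct.unpack_from("<H", boot_sector, 17)[0]
    if root_ent_cnt:
        # FAT-12/16: the root dir is a fixed region, so the entry
        # count (typically 512) is a hard ceiling.
        return (f"FAT-12/16 root: {root_ent_cnt} entries max, "
                f"{root_ent_cnt * 32} bytes reserved")
    # FAT-32: BPB_RootClus, 4 bytes at offset 44, points at an
    # ordinary cluster chain, so the root grows like any directory.
    root_clus = struct.unpack_from("<I", boot_sector, 44)[0]
    return f"FAT-32 root: cluster chain starting at cluster {root_clus}"

So the only hard ceiling on entry counts is the FAT-12/16 root region;
a FAT-32 directory just keeps chaining clusters.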


I think the methodology is reasonable, but I have two concerns.
First, we know that windoze places files somewhat arbitrarily
on the hard drive, although FAT32 is more 'front to back' than
NTFS.
I would like to see a disk-usage map (possibly using Norton
defrag; not practical using scandisk) to show that the 'back'
of the hard drive is empty, and some proof that scandisk
can read and write those sectors.


I'm not sure I understand what you're trying to determine.


I recall some problem with partitions above a certain size, where
Windows would create files in the 'back' of the partition (past a
certain byte count) but then be unable to read them, or be unable to
defrag them. This was related to BIOS settings and Windows limits, but
I think you may have bypassed that problem.


Since I've blown past the 137 gb barrier by filling a 500 gb drive
with 400 gb of material, does it matter *how* the drive is filled
(either physically or logically) ?


Not really, now that I know how you did it.
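
For reference, the 137 gb barrier is just the 28-bit LBA limit of the
older ATA command set - a one-liner's worth of arithmetic (Python):

# 28-bit LBA can address 2**28 sectors of 512 bytes each.
SECTOR = 512
lba28_bytes = 2**28 * SECTOR
print(f"28-bit LBA ceiling: {lba28_bytes:,} bytes "
      f"(~{lba28_bytes / 10**9:.1f} GB)")
# -> 137,438,953,472 bytes, i.e. ~137.4 GB. Anything past that needs
#    48-bit LBA support somewhere in the BIOS/driver stack.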


What is the significance of the "back" end of the hard drive, and
whether it is used or empty?

as above.


We know that I started with 121 million clusters, and I've used
slightly over 100 million of them in this test. The back-end is
pretty small at this point.

As I recall, the problem is not in creating the files, it is in using
them.

Second, I would like to see some files written to the back of the hard
drive and successfully read, updated and re-read.


Since I have 540 replicated sets of files, would a series of random
file-comparisons made on those sets suffice to show that win-98 is
able to retrieve the files and perform a byte-level comparison on
them? Would such a test demonstrate the integrity of the file system
as well as win-98's ability to work with it?


Comparisons of the written files only prove that both were written
correctly. I am concerned about the ability to randomly update files
past the usual limits. Maybe 'randomly update' is a poor choice of
words, as files are not updated in place - a new file is written, then
the old one 'erased'. But I am sure you understand what I mean here.
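
Something along these lines is what I have in mind - a minimal sketch
(Python; the path and payload are placeholders, not taken from your
actual test setup - you would point it at files the defrag map shows
sitting past the old limits):

import hashlib

def checksum(path: str, bufsize: int = 1 << 20) -> str:
    # Stream the file through MD5 so large files fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def write_update_reread(path: str, payload: bytes) -> bool:
    with open(path, "wb") as f:       # initial write
        f.write(payload)
    first = checksum(path)            # first read-back
    with open(path, "wb") as f:       # "update": rewrite in full,
        f.write(payload[::-1])        # since FAT files aren't
    second = checksum(path)           # updated in place; re-read
    return bool(first) and bool(second)

If the re-read after the rewrite still checks out for files living in
the back of the volume, that would answer my concern.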

Hmmm - put the Windows registry or swap file way back there and see
what happens.

I'm very interested in this, but I know I won't ever use it. I have 200
and 300 gig drives on my file server, which runs Linux.

Stuart