#52
Virtual Machine and NTFS
"mm" wrote in message
... You know, until just now, I figured there was something like DOS to access NTFS partitions. It never occurred to me that there wouldn't be. While XP's recovery console was severely crippled, it is not so for Vista's or Windows 7's - and there is also support for NTFS in Linux. |
#53
Virtual Machine and NTFS
On 10/19/2010 1:17 AM, Philo Pastry wrote:
> John John - MVP wrote:
>> Let's address your blatant lie: "What you don't understand about NTFS is that it will silently delete user-data to restore its own integrity as a way to cope with a failed transaction..." It is you who doesn't understand anything about how NTFS works, so you spread lies and nonsense! NTFS DOES NOT silently delete user data to restore its own integrity, and C. Quirke does not in any way say that in his blog.
>
> Perhaps you have a reading comprehension problem. This is what Quirke says, and what I've experienced first-hand when I see IIS log file data being wiped away because of power failures:
>
> -----------
> It also means that all data that was being written is smoothly and seamlessly lost. The small print in the articles on Transaction Rollback makes it clear that only the metadata is preserved; "user data" (i.e. the actual content of the file) is not preserved.
> -----------

You REALLY don't understand anything! What Chris is saying is that when writes are interrupted, the *NEW* data being written is not kept - not that what is *already* on the disk (flushed) is discarded or in any way deleted! Listen, most of us who have been using NTFS have at one time or another experienced glitches, crashes or unprotected power failures while working with files. With NTFS, when the computer is rebooted, most of the time it's like nothing happened at all; you might have lost the work that was being saved at the moment of the crash, but the file itself, and whatever was successfully saved and flushed while you were working, will still be stored on the disk and will still be intact. Don't try to lie and twist the facts; everyone reading here will see right through your lies!

Your statement that NTFS silently deletes user data to restore its own integrity was made in ignorance, and to make readers think that any and all of their files are at risk because NTFS will modify their user data; the false statement even gives the impression that this will happen to files that are not being used.

> Do you understand the difference between metadata and "user data"?

Oh please, don't try to be smart and obfuscate the issue by bringing in things that will only end up biting you in the a$$! If you are so smart about metadata, you should already know that some of it is user defined or user owned! Or do you think that the file system should sacrifice critical system metadata and risk corrupting the MFT in order to try to save user data which was damaged or lost during a write operation? Are you saying that the file system should not first and foremost attempt to guarantee the integrity of the file system structure and the safekeeping of all the files on the disk, at the expense of one user file, when glitches and failures occur? What is being described is journaling, and it is perfectly normal NTFS behaviour; this journaling ensures the atomicity of write operations.

> Journalling ensures the *complete-ness* of write operations. Partially completed writes are rolled back to their last complete state. That can mean that user-data is lost.

It means that the incomplete write was not flushed to the disk and that the old version of the file will not be updated; what is lost is what was in RAM when the file system was attempting to commit and flush it to the disk! You, on the other hand, seem to think it preferable to have the file system keep incomplete or corrupt write operations and then have scandisk run at boot time so that it may /try/ to recover lost clusters or save damaged file segments.

> In my experience, drive reliability, internal caching and bad-sector re-mapping have made most of what NTFS does redundant.
>
> The odd thing is - I don't believe I've ever had to resort to scouring through .chk files for data that was actually part of any sort of user file that was corrupted. Any time I've come across .chk files, I've never actually had any use for them. And I can tell you that I would really be ****ed off if I was working on a file on an NTFS system and it suffered a power failure or some other sort of interruption, and my file got journalled back to some earlier state just because the file system didn't fully journal its present state or last write operation.

You still don't understand: the last successful write will be present; what was successfully saved and flushed while you were working with the file will be intact.

> I've seen too many examples of NT-server log files that contain actual and up-to-date data one hour, and because of a power failure the system comes back up and half the stuff that *was* in the log file is gone. That's an example of meta-data being preserved at the sake of user data.

You're lying again, and the above statement proves beyond the shadow of a doubt that you have absolutely no experience whatsoever with NT server systems! Look, no one is saying that everything about NTFS is perfect and that data loss never occurs with NTFS; that is why smart computer users keep backups! On the other hand, stop lying about things you know nothing of, and stop trying to make us believe that FAT32 is more robust than NTFS; those with real-life experience know better. FAT32 has some advantages in certain situations and NTFS has advantages in others; by and large, in today's computing environment, for most users the advantages offered by NTFS far outweigh those offered by FAT32.

John
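[The rollback guarantee being argued over - an interrupted write leaves the old, flushed version of the file intact rather than a half-written one - is the same guarantee applications approximate with the write-to-temp-then-rename pattern. A minimal sketch in Python; this illustrates atomic replacement at the application level, not NTFS's internal journalling:]

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace the file at 'path' so readers see either the old contents
    or the new contents - never a partially written file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the new data to disk, not just to RAM cache
        os.replace(tmp, path)     # atomic rename on both POSIX and Windows/NTFS
    except BaseException:
        os.unlink(tmp)            # a crash mid-write leaves 'path' untouched
        raise
```

[If the process dies before the `os.replace`, only the temp file is incomplete; the original file still holds its last complete state - which is precisely the "last successful write will be present" behaviour described above.]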
#54
Virtual Machine and NTFS
Sunny wrote:
> OK, explain how I get (using Acronis True Image Backup) "The incremental backup will exceed the 4Gb limit in your backup file location"

Simple. Acronis doesn't have the brains to split its backup files into 4 GB chunks. Which is a useful feature the user might want even if the backup was being written to an NTFS volume.
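[The segmenting being asked for - keeping each output file under FAT32's 4 GB ceiling - is straightforward to implement. A sketch in Python; the file naming and chunk size here are illustrative, not how Acronis actually behaves:]

```python
# FAT32 caps a single file at 2**32 - 1 bytes (just under 4 GiB)
FAT32_MAX = 2**32 - 1

def write_segmented(src_path, dest_prefix, chunk_size=FAT32_MAX):
    """Copy src_path into numbered segments, each no larger than chunk_size.
    Returns the number of segment files written."""
    part = 0
    with open(src_path, "rb") as src:
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            with open(f"{dest_prefix}.{part:03d}", "wb") as seg:
                seg.write(data)
            part += 1
    return part
```

[Concatenating the `.000`, `.001`, ... segments in order reproduces the original file byte for byte, which is why tools that do split this way can store backups of any size on a FAT32 volume.]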
#55
Virtual Machine and NTFS
Bill in Co wrote:
>> I formatted a 500 GB drive as a single FAT32 volume using a 4 KB cluster size, just as an exercise to test whether Windows 98 SE could be installed and function on such a volume, and it did - with the exception that it would not create a swap file on such a volume.
>
> Well, that's really nice. No swap file? Great.

I created a swap file on a second hard drive that had a smaller-sized volume.

> (Plus the other utilities you said won't work anymore (like the much faster version of Defrag from WinME).)

Those utilities will work on volumes that have around 25 to 30 million clusters. Again, this far exceeds the upper limit of 4.2 million that Microsoft claimed was the maximum number of clusters for a FAT32 volume.

>> And as Chen mentions, yes - the *first* directory command on FAT32 volumes with a high cluster count does take a few minutes (but not successive directory commands).
>
> A few *minutes*???? Are you kidding me??? THAT is totally unacceptable.

Sure, but that's if you've booted the machine into DOS. I really don't remember whether it took that long to view the drive in Explorer under Win-98, and there is no such delay viewing the drive under XP. So the delay is not so much the fault of the file system as it is of the overlying OS and the strategy it uses to compute free space - whether it has to compute free space each and every time the drive is viewed, or whether it can save that info somewhere on the drive without having to recompute it periodically.

> With all the things you've mentioned it sure seems like there is a price to pay.

When it comes to running XP on a FAT32 drive, the only price is a maximum file size of 4 GB. The benefits are a more accessible and portable file system, more third-party tools and utilities, faster performance, and arguably better / simpler data recoverability (and I don't mean the creation of .chk files when I say that).

> Oh yeah, not the least of which is you can't *ever* have a file larger than 4 GB (this can be a bit of a PIA for some photo, video, and disk imaging work).

Like I said earlier, I've seen Adobe Premiere CS3 on an XP system running on a FAT32 drive create large video files by segmenting the output across multiple 4 GB files automatically.

>> What I found in my testing, either in DOS or under Win-98, is that the first dir command (or Explorer view) is instantaneous as long as the number of clusters doesn't exceed 6.3 million. This equates to a FAT size of about 25 MB.
>
> Which is a LONG ways from the 500 MB mentioned.

6.3 million clusters, at 32 KB each, results in a 200 GB volume, which isn't a LONG way from 500 GB. If you want the first DOS dir command to be instantaneous, limit the number of clusters to 6.3 million (max volume size = 200 GB at a 32 KB cluster size). If you can tolerate the first dir taking up to several minutes, then DOS is compatible with many millions of clusters on a FAT32 drive - at least 120 million. If running Win-98 and you want all your tools and diagnostic programs to run, limit the number of clusters to 30 million (max volume size = 980 GB at a 32 KB cluster size). If running XP, I'm not aware of any limit to the number of clusters affecting the performance of the volume or the latency of any drive operation.
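[The arithmetic behind those cluster-count limits is simple: volume capacity is clusters times cluster size, and the FAT itself holds one 32-bit (4-byte) entry per cluster. A quick sketch, using the thresholds quoted in the post (which are the poster's empirical figures, not official Microsoft limits):]

```python
def fat32_volume_gb(clusters, cluster_kb=32):
    """Maximum volume capacity for a given cluster count and cluster size,
    in decimal GB (as drive makers count them)."""
    return clusters * cluster_kb * 1024 / 1e9

def fat_table_mb(clusters):
    """Approximate size of the FAT: one 4-byte entry per cluster."""
    return clusters * 4 / 1e6

# 6.3 million clusters: first DOS 'dir' still instantaneous (~200 GB volume, ~25 MB FAT)
print(fat32_volume_gb(6_300_000), fat_table_mb(6_300_000))
# 30 million clusters: Win-98 tools still work (~980 GB volume)
print(fat32_volume_gb(30_000_000))
```

[Note how far both figures sit above the 4.2 million clusters Microsoft cites as the design maximum for a format-time FAT32 volume.]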
#56
Virtual Machine and NTFS
On 10/18/2010 11:01 PM, Philo Pastry wrote:
> John John - MVP wrote:
>> People working with video editing and multimedia files often run across this 4GB file limitation. Backup/imaging utilities also often run into problems caused by this file size limitation.
>
> About 3 years ago I installed XP on a 250 GB FAT32-partitioned hard drive and installed Adobe Premiere CS3. It had no problems creating large video files that spanned the 4 GB file-size limit of FAT32.

Wow! How absolutely unbelievable! Now you are telling us that you broke the binary limits of the FAT32 file system! The BS never stops... what next?
#57
Virtual Machine and NTFS
On 10/19/2010 10:01 AM, Philo Pastry wrote:
> Sunny wrote:
>> OK, explain how I get (using Acronis True Image Backup) "The incremental backup will exceed the 4Gb limit in your backup file location"
>
> Simple. Acronis doesn't have the brains to split its backup files into 4 GB chunks. Which is a useful feature the user might want even if it was being written to an NTFS volume.

Splitting a file into multiple segments of less than 4 GB each and then saying that you created files greater than 4 GB on FAT32 is just you trying to spread more of your lies and BS! You just never give up with your nonsense!
#58
Virtual Machine and NTFS
mm wrote:
> I'll say this. At first when win98FE crashed, I would find files that were missing

Which proves my point. How long ago does your recollection date to? Win-98, First Edition? So we are talking about 10, 12 years ago? That's when many people formed their impressions of Win-98 and FAT32 - back when you might have had 8 or 16 MB of system RAM, and when motherboards, video cards, drivers and application software were barely functional for anything beyond 30 minutes of operation. Microsoft came out with XP just as the reliability and performance of PC hardware took a major turn for the better in late 2002 / early 2003, when PCs had 256 if not 512 MB of RAM and hard drives started to do their own internal error correction and began to have decent-sized internal cache buffers. Of course, millions of home XP PCs were soon used as botnet trojans, because XP was designed to be used by corporations, managed by IT staff, behind hardware firewalls and other sophisticated network appliances - but none of that sank in with most people, because XP was the emperor with no clothes from 2002 through late 2006 at least.

>> Or, if you've installed DOS first on a FAT32 drive, and then install XP as a second OS, you can have a choice at boot-up to run DOS or XP.
>
> Why not just put all the dos files in the XP partition, and use a dos boot disk to boot to that?

Who wants to mess with a DOS boot disk? On some of my XP systems, I start with a large drive, divide it up into the volumes I want, format all volumes as FAT32 with a custom-selected cluster size, and then install DOS 7.1 so that the drive boots into DOS on the C drive. I then install XP onto C as well, and when the system boots I get a menu asking whether I want to boot into DOS or XP. What could be simpler or more ergonomic than that?
#59
Virtual Machine and NTFS
On 10/19/2010 6:08 AM, mm wrote:
> You know, until just now, I figured there was something like DOS to access NTFS partitions. It never occurred to me that there wouldn't be.

You can use a PE disk like UBCD4Win, or a live Linux CD, or you can mount the disk in another Windows 2000/XP/Vista/7 machine. The "DOS versus Recovery Console" argument is a non-issue; better methods have long been available.

John
#60
Virtual Machine and NTFS
John John - MVP wrote:
>> I've seen too many examples of NT-server log files that contain actual and up-to-date data one hour, and because of a power failure the system comes back up and half the stuff that *was* in the log file is gone.
>
> You're lying again and the above statement proves beyond the shadow of a doubt that you have absolutely no experience whatsoever on NT server systems!

We have an NT4 server running an IIS website. A log file of web-server hits is created daily: at the end of each day, the current log file is closed and the next one is opened. I can access these log files on our local LAN, and I can even copy an image of the current log file from the NT4 server to my machine. Every time there was a power failure, not only would ALL the data in the current log file be replaced with nulls after the server was rebooted, but so would the data in the 14 previous days' log files. Their file size was not altered - but all the data they contained was replaced by nulls. A fine example of NTFS journalling.