CJaysMusic
There is no formal testing that shows you get a 50 percent CPU performance gain from a 32-bit OS to a 64-bit OS. It's propaganda and it's misleading innocent forum members.
Fred's sig is a lie. I don't know how else to say it, but just to say it. I have nothing against Fred, except for his misleading statements and misleading signature
Cj
It's cool CJ!
I have nothing against you either!
So here we go! I will add more INFO later for you unbelievers!
BEST overall OS system: http://www.passmark.com/baselines/top.html
WHY INTEL and not AMD! Here we go! All tests made on a 64-bit OS: http://www.cpubenchmark.net/common_cpus.html
SSD disks are faster and better than conventional SATA disks: http://www.harddrivebenchmark.net/high_end_drives.html
32 vs 64 bit: http://it.anandtech.com/IT/showdoc.aspx?i=3532
A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported through a hardware compatibility mode (in which the new processors support the older 32-bit version of the instruction set as well as the 64-bit version), through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor (as with the Itanium processors from Intel, which include an x86 processor core to run 32-bit x86 applications). The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications [9].
One significant exception to this is the AS/400, whose software runs on a virtual ISA, called TIMI (Technology Independent Machine Interface), which is translated to native machine code by low-level software before being executed. The low-level software is all that has to be rewritten to move the entire OS and all software to a new platform, such as when IBM transitioned their line from the older 32/48-bit "IMPI" instruction set to 64-bit PowerPC (IMPI wasn't anything like 32-bit PowerPC, so this was an even bigger transition than from a 32-bit version of an instruction set to a 64-bit version of the same instruction set).
While 64-bit architectures indisputably make working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks. In the x86-64 architecture (AMD64), the majority of 32-bit operating systems and applications are able to run smoothly on the 64-bit hardware.
Sun's 64-bit Java virtual machines are slower to start up than their 32-bit virtual machines because Sun has only implemented the "server" JIT compiler (C2) for 64-bit platforms [10]. The "client" JIT compiler (C1), which produces less efficient code but compiles much faster, is unavailable on 64-bit platforms.
Speed is not the only factor to consider in a comparison of 32-bit and 64-bit processors. Workloads such as multi-tasking, stress testing, and clustering for HPC (high-performance computing) may be better suited to a 64-bit architecture given the correct deployment. 64-bit clusters have been widely deployed in large organizations such as IBM, HP and Microsoft for this reason.
Pros and cons
A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of main memory. This is not entirely true:
- Some operating systems reserve portions of process address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, Windows XP DLLs and other user-mode OS components are mapped into each process's address space, leaving only 2 to 3 GB (depending on the settings) of address space available. This restriction is not present in 64-bit operating systems.
- Memory-mapped files are becoming more difficult to implement on 32-bit architectures, especially since the introduction of relatively cheap recordable DVD technology. A 4 GB file is no longer uncommon, and such large files cannot be memory mapped easily on 32-bit architectures; only a region of the file can be mapped into the address space, and to access such a file by memory mapping, those regions have to be mapped into and out of the address space as needed. This is a problem, as memory mapping remains one of the most efficient disk-to-memory methods when properly implemented by the OS (a minimal mmap sketch follows this list).
- Some programs such as data encryption software can benefit greatly from 64-bit registers (if the software is 64-bit compiled) and effectively execute 3 to 5 times faster on 64-bit than on 32-bit.
- Some complex numerical analysis algorithms are limited in their precision by the errors that can creep in because not all floating point numbers can be accurately represented with a small number of bits. Creeping inaccuracies can lead to incorrect results, often leading to attempts to divide by zero, or to not identify two quantities as being identical for practical purposes. International Computers Limited added 128-bit support to the ICL 2900 Series in 1974 largely as a result of requests from the scientific community.
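As promised above, here is a minimal C sketch of the memory-mapping workaround a 32-bit process needs for files bigger than its address space: map a sliding window instead of the whole file. It assumes a POSIX system and a hypothetical file name, so treat it as an illustration rather than production code.

/* Minimal sketch (assumes a POSIX system): map a window of a large file
 * instead of the whole file, as a 32-bit process must do when the file
 * exceeds its address space.  On a 32-bit glibc build, compile with
 * -D_FILE_OFFSET_BITS=64 so off_t can hold offsets past 4 GB. */
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "big_video.bin";              /* hypothetical large file */
    size_t window = 64 * 1024 * 1024;                /* map 64 MB at a time     */
    off_t  offset = (off_t)5 * 1024 * 1024 * 1024;   /* start past the 4 GB mark */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* The offset must be a multiple of the page size (5 GiB is). */
    void *p = mmap(NULL, window, PROT_READ, MAP_PRIVATE, fd, offset);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* ... read from the mapped window, then unmap and slide it ... */
    munmap(p, window);
    close(fd);
    return 0;
}

A 64-bit process can simply map the entire file in one call and let the OS page it in as needed.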
The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to swollen pointers and possibly other types and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache utilization. Maintaining a partial 32-bit model is one way to handle this and is in general reasonably effective. In fact, the highly performance-oriented z/OS operating system takes this approach currently, requiring program code to reside in any number of 32-bit address spaces while data objects can (optionally) reside in 64-bit regions.
Currently, most proprietary x86 software is compiled into 32-bit code, not 64-bit code, so it does not take advantage of the larger 64-bit address space or wider 64-bit registers and data paths on x86 processors, or the additional registers in 64-bit mode. However, users of most RISC platforms, and users of free or open-source operating systems (where the source code is available for recompiling with a 64-bit compiler), have been able to use exclusively 64-bit computing environments for years. Not all such applications require a large address space or manipulate 64-bit data items, so they would not benefit from the larger address space or wider registers and data paths. The main advantage of 64-bit versions of such applications is the ability to access more registers in the x86-64 architecture.
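If you want to check which way a given program was built, here is a small C sketch; the predefined macros shown are the common GCC/Clang and MSVC spellings, so other compilers may spell them differently.

/* Minimal sketch: report whether this binary was compiled as 32-bit or
 * 64-bit x86.  __x86_64__ is the usual GCC/Clang macro, _WIN64 the MSVC one. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) || defined(_WIN64)
    puts("compiled as a 64-bit binary");
#else
    puts("compiled as a 32-bit binary");
#endif
    printf("pointer width: %zu bits\n", sizeof(void *) * 8);
    return 0;
}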
Software availability
x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers. Although most software can run in a 32-bit compatibility mode (also known as an emulation mode, e.g. Microsoft's WoW64 technology) or run in 32-bit mode natively (on AMD64), it is usually impossible to run a driver (or similar software) in that mode, since such a program usually runs between the OS and the hardware, where direct emulation cannot be employed. Because 64-bit drivers for most devices were not available until early 2007, using a 64-bit Microsoft Windows operating system was considered impractical. However, the trend is changing towards 64-bit computing as most manufacturers now provide both 32-bit and 64-bit drivers. Linux/Unix operating systems do not have this problem with open-source drivers that are already available for a 32-bit OS, since 64-bit builds can be made from them.
Because device drivers in operating systems with monolithic kernels, and in many operating systems with hybrid kernels, execute within the operating system kernel, it is possible to run the kernel as a 32-bit process while still supporting 64-bit user processes. This provides the memory and performance benefits of 64-bit for users without breaking binary compatibility with existing 32-bit device drivers, at the cost of some additional overhead within the kernel. This is the mechanism by which older versions of Mac OS X enable 64-bit processes while still supporting 32-bit device drivers.
64-bit data models
Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common recurring problem is that some programmers assume that pointers have the same length as some other data type, and that they can therefore transfer quantities between these data types without losing information. Those assumptions happen to be true on some 32-bit machines (and even some 16-bit machines), but they are no longer true on 64-bit machines. The C programming language and its descendant C++ make it particularly easy to make this sort of mistake. Differences between the C89 and C99 language standards also exacerbate the problem [11].
To avoid this mistake in C and C++, the sizeof operator can be used to determine the size of these primitive types if decisions based on their size need to be made, both at compile time and at run time. Also, the <limits.h> header in the C99 standard, and the numeric_limits class in the <limits> header in the C++ standard, give more helpful information; sizeof only returns the size in chars. This can be misleading, because the standards leave the definition of the CHAR_BIT macro, and therefore the number of bits in a char, to the implementations. However, except for compilers targeting DSPs, "64 bits == 8 chars of 8 bits each" has become the norm.
One needs to be careful to use the ptrdiff_t type (in the standard header <stddef.h>) for the result of subtracting two pointers; too much code incorrectly uses "int" or "long" instead. To represent a pointer (rather than a pointer difference) as an integer, use uintptr_t where available (it is only defined in C99, but some compilers otherwise conforming to an earlier version of the standard offer it as an extension).
Neither C nor C++ define the length of a pointer, int, or long to be a specific number of bits. In C99, however, the stdint.h header provides names for integer types with certain numbers of bits, where those types are available.
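Putting those types together, here is a small C sketch; it assumes a C99 compiler that ships <stdint.h> and the %td/%j printf length modifiers.

/* Minimal sketch: portable types for pointer arithmetic.  ptrdiff_t holds a
 * pointer difference, uintptr_t (C99, <stdint.h>) holds a pointer's value as
 * an integer, and int64_t is an exact-width 64-bit integer. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

int main(void)
{
    char buffer[128];
    char *start = buffer;
    char *end   = buffer + sizeof buffer;

    ptrdiff_t span = end - start;        /* not "int" or "long"     */
    uintptr_t addr = (uintptr_t)start;   /* pointer as an integer   */
    int64_t   wide = 1;                  /* exactly 64 bits wide    */

    printf("span = %td bytes, address = %#jx, wide = %jd\n",
           span, (uintmax_t)addr, (intmax_t)wide);
    return 0;
}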
Specific data models
In most programming environments on 32-bit machines, pointers, "int" types, and "long" types are all 32 bits wide. However, in many programming environments on 64-bit machines, "int" variables are still 32 bits wide, but "long"s and pointers are 64 bits wide. These are described as having an LP64 data model. Another alternative is the ILP64 data model, in which all three data types are 64 bits wide, and even SILP64, where "short" variables are also 64 bits wide. However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment without changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. "LL" refers to the "long long" type, which is at least 64 bits on all platforms, including 32-bit environments.
64-bit data models

Data model | short | int | long | long long | pointers | Sample operating systems
LLP64      |  16   | 32  |  32  |    64     |    64    | Microsoft Win64 (X64/IA64)
LP64       |  16   | 32  |  64  |    64     |    64    | Most Unix and Unix-like systems (Solaris, Linux, etc.)
ILP64      |  16   | 64  |  64  |    64     |    64    | HAL
SILP64     |  64   | 64  |  64  |    64     |    64    | ?
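A quick way to find out which model your own compiler uses is to print the widths of the types in the table; a minimal C sketch:

/* Minimal sketch: print the widths of the table's types for the current
 * compiler, which identifies its data model. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("short     : %zu bits\n", sizeof(short)     * CHAR_BIT);
    printf("int       : %zu bits\n", sizeof(int)       * CHAR_BIT);
    printf("long      : %zu bits\n", sizeof(long)      * CHAR_BIT);
    printf("long long : %zu bits\n", sizeof(long long) * CHAR_BIT);
    printf("pointer   : %zu bits\n", sizeof(void *)    * CHAR_BIT);
    return 0;
}

On a typical 64-bit Linux build this prints 16/32/64/64/64 (LP64); a 64-bit Windows build prints 16/32/32/64/64 (LLP64).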
Many 64-bit compilers today use the LP64 model (including Solaris, AIX, HP-UX, Linux, Mac OS X, FreeBSD, and IBM z/OS native compilers). Microsoft's VC++ compiler uses the LLP64 model. The disadvantage of the LP64 model is that storing a long into an int may overflow. On the other hand, casting a pointer to a long will work. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of integer types.
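A minimal C sketch of the two pitfalls just described; what it prints depends on whether the compiler uses LP64 or LLP64.

/* Minimal sketch: on LP64 the long-to-int assignment can lose the upper bits
 * but a pointer fits in a long; on LLP64 the assignment is safe but a pointer
 * does not fit in a long. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    long big = LONG_MAX;          /* 2^63 - 1 on LP64, 2^31 - 1 on LLP64 */
    int  narrowed = (int)big;     /* can lose the upper bits on LP64     */

    void *p = &big;
    long  as_long = (long)p;      /* fits on LP64; may truncate on LLP64 */

    printf("big = %ld, narrowed = %d\n", big, narrowed);
    printf("p = %p, as_long = %#lx\n", p, (unsigned long)as_long);
    return 0;
}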
Note that a programming model is a choice made on a per-compiler basis, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS API typically dominates.
Another consideration is the data model used for drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for DMA. As an example, a driver for a 32-bit PCI device asking the device to DMA data into the upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an IOMMU.
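Purely as an illustration (this is not any real driver API), here is a small C sketch of the check involved: does a buffer's physical address range fit under the device's 32-bit DMA mask?

/* Minimal sketch: a 32-bit PCI device can only address the low 4 GB, so the
 * OS or driver must check a buffer's physical address against the device's
 * DMA mask before programming the transfer. */
#include <stdio.h>
#include <stdint.h>

#define DMA_MASK_32BIT 0xFFFFFFFFull

/* Hypothetical helper: can the device reach this physical address range? */
static int dma_addr_ok(uint64_t phys_addr, uint64_t len, uint64_t mask)
{
    return phys_addr + len - 1 <= mask;
}

int main(void)
{
    uint64_t below = 0x7FFF0000ull;    /* below the 4 GB barrier */
    uint64_t above = 0x120000000ull;   /* above the 4 GB barrier */

    printf("below 4 GB: %s\n", dma_addr_ok(below, 4096, DMA_MASK_32BIT)
           ? "direct DMA ok" : "needs a bounce buffer or IOMMU");
    printf("above 4 GB: %s\n", dma_addr_ok(above, 4096, DMA_MASK_32BIT)
           ? "direct DMA ok" : "needs a bounce buffer or IOMMU");
    return 0;
}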
Here is how Microsoft puts it: "If you want to be sure that your PC will be able to take advantage of increased memory and new hardware and software in the years ahead, a 64-bit PC is a good choice. If you run a lot of programs at once and switch back and forth between them often, a 64-bit PC can give you a more seamless, instantaneous response. And the more memory you have in your PC, the more programs you can run smoothly and simultaneously." http://www.microsoft.com/windows/windows-vista/compare-editions/64-bit.aspx
post edited by Freddie H - September 04, 09 3:29 AM