[quote name='AdultLink']I see, I don't have the facts, and I'm ignorant, when people who actually USE the software mode, get 10 FPS.
On a small area. Using a Geforce4mx.
On a next-gen area, they'd get 1-5 fps.
Why don't YOU pay attention dumbass:
Using software mode is slow as hell.
If I knew there was a software mode I could've told you this a LONG time ago.[/quote]
Dude, you keep missing the obvious. You keep trying to compare apples and oranges. The performance strengths and weaknesses of different platforms vary wildly, and the PC is no barometer of how other systems will perform at the same task. Even different brands of x86 processor deliver dramatically different performance at the same clock rate, which is why AMD sells its products under a performance rating indicating the equivalent Pentium 4 clock speed. For instance, the Athlon 64 rated at 3000+, meaning performance equivalent to or better than a 3.0 GHz Pentium 4 in most areas, runs at just 2 GHz. Other processor architectures can substantially outperform an x86 at the same clock rate. This is especially true of the PowerPC line which, surprise, surprise, we find in the GameCube. On top of that, the Gekko was substantially customized by IBM for console game workloads. For instance, 64-bit registers that would have been useless for nearly all game code were changed to instead perform SIMD operations on 32- and 16-bit values, which added great value.
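To picture what those repurposed registers buy you, here's a minimal Python sketch of the "paired singles" idea: one register holds two 32-bit floats, so one instruction operates on both halves at once. This is a plain simulation of the concept, not real hardware code; the function names echo actual Gekko mnemonics, but the surrounding example values are made up for illustration.

```python
# Simulation of Gekko-style "paired single" SIMD: each 64-bit FPU
# register is treated as a pair of 32-bit floats, and one instruction
# operates on both lanes at once.

def ps_add(a, b):
    """One paired-single add: both lanes in a single operation."""
    return (a[0] + b[0], a[1] + b[1])

def ps_mul(a, b):
    """One paired-single multiply: both lanes in a single operation."""
    return (a[0] * b[0], a[1] * b[1])

# A 2-component scale-and-offset step, common in vertex math: with
# paired singles this is one multiply plus one add, instead of two
# of each on a scalar-only FPU.
pos = (1.0, 2.0)
scale = (0.5, 0.5)
offset = (10.0, 20.0)
result = ps_add(ps_mul(pos, scale), offset)
print(result)  # (10.5, 21.0)
```

Halving the instruction count on exactly the kind of math games hammer on is why a customization like this matters more than raw clock speed.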
You've made a few assumptions that have led you astray. First, you've assumed that what you saw in certain GameCube games is pixel shading of the sort produced in recent years by dedicated hardware units on the Xbox and in PC games. In turn you've assumed the GC has hardware features that Nintendo has chosen to hide from public knowledge for unfathomable reasons, and furthermore has hidden those features from most developers or somehow made them reluctant to use them in their projects. (Feel free to show some documentation for the existence of a pixel shader in the Flipper chip.) You may find this interview of interest, in which a very reputable GameCube developer speaks of being able to apply Flipper's pipeline in a manner whose output is equivalent to a shader's. It requires a lot of software work, but he asserts being able to match the Xbox:
http://www.planetgamecube.com/specials.cfm?action=profile&id=203 In other words, a versatile design can often match dedicated functions where it matters. It just takes talent and work. Not every developer can manage that, hence the lack of wider usage.
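Here's a toy illustration of how chained fixed-function combiner stages can reproduce a "shader" expression. The stage equation below (output = d + lerp(a, b, c)) is a simplified stand-in for a Flipper-style texture environment stage, per color channel, and the texture/light/specular values are made-up numbers; the point is only that composing simple fixed stages yields the same math a programmable shader would compute in one expression.

```python
def tev_stage(a, b, c, d):
    # One simplified fixed-function combiner stage:
    # output = d + lerp(a, b, c), operating on one color channel.
    return d + a * (1.0 - c) + b * c

tex, light, spec = 0.8, 0.5, 0.1

# The "shader" way: write the whole expression directly.
shader_out = tex * light + spec

# The fixed-function way: build the same result from two chained stages.
stage1 = tev_stage(0.0, tex, light, 0.0)    # = tex * light
stage2 = tev_stage(0.0, spec, 1.0, stage1)  # = previous + spec

print(abs(shader_out - stage2) < 1e-9)  # True
```

Mapping an arbitrary shader expression onto a fixed set of stages is exactly the "talent and work" part: it has to be done by hand, per effect, which is why not every developer bothers.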
Second, you've assumed the PC was the ideal platform, compared to all others, for attempting this operation in software. The only reason you see most people doing their experimenting on a PC (or a Mac under OpenGL) is that it's the only open platform that encourages such work without requiring you to become a registered developer. Sure, there are people hacking consoles with DIY dev tools, but doing it on a PC is much simpler if you're just seeking an understanding of the technology rather than any particular implementation. The x86 doesn't dominate the PC market because it's the best architecture and won over developers. It's there because the competing platforms were badly managed, thanks to each being confined to a single company, and died with their makers. The use of PCs in business guaranteed the platform wasn't going anywhere, and the multiple companies selling compatible machines further assured its stability. In the era when competing platforms were common, the Motorola 680x0 series was far more popular, almost to the point of exclusivity among 16-bit systems, including those from Apple, Atari, and Commodore. One of the few notable exceptions was the Acorn Archimedes, which contained the first ARM chip. It was a good deal more powerful than the 80286 chips then common in PCs but was relatively unknown outside the UK. The CPU used in the GBA, DS, and most PDAs and cellphones today is a direct descendant.
Not only can another processor have greatly better performance at a specific operation despite a lower clock speed, a specific feature that accelerates a major portion of an otherwise software-only operation can make a major difference, much the way a small amount of dedicated functionality accelerates DVD playback. This is amply demonstrated by running two versions of an app on a modern processor with SIMD extensions. Assuming the application makes heavy use of the functions the SIMD set is intended to accelerate, the code that uses SIMD rather than performing each operation individually will be much faster. Even without dedicated shaders, a set of CPU and coprocessor functions that speeds up a subset of the operation can provide enough of an improvement to make it usable in applications where a platform lacking those traits couldn't deliver. As Julian Eggebrecht indicates in the interview linked above, the GameCube offers exactly this kind of acceleration despite lacking a complete solution in hardware.
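The SIMD point can be sketched by counting operations. The simulation below processes an array element by element (scalar) versus four elements per "instruction" (4-wide, SSE-style); it's plain Python standing in for machine instructions, so the op counts are the illustration, not a benchmark.

```python
# Scalar path: one add instruction per element.
def add_scalar(xs, ys):
    ops = 0
    out = []
    for x, y in zip(xs, ys):
        out.append(x + y)
        ops += 1  # one instruction per element
    return out, ops

# SIMD path: one 4-wide vector add covers four elements at once
# (the way an SSE register holds four 32-bit floats).
def add_simd4(xs, ys):
    ops = 0
    out = []
    for i in range(0, len(xs), 4):
        out.extend(a + b for a, b in zip(xs[i:i + 4], ys[i:i + 4]))
        ops += 1  # one vector instruction per group of four
    return out, ops

xs = [float(i) for i in range(8)]
ys = [1.0] * 8
scalar_out, scalar_ops = add_scalar(xs, ys)
simd_out, simd_ops = add_simd4(xs, ys)
print(scalar_ops, simd_ops)  # 8 2
```

Same results, a quarter of the instructions on the hot path, which is exactly why a partial hardware assist can make an otherwise software technique viable on one platform and not another.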
(Side note: you may have noticed x86 processors have nearly no presence in video game systems. There was a British company, Konix, that had a system with a very elaborate reconfigurable controller built around an 8086 CPU, but the company lacked the funds to reach launch.
http://home.wanadoo.nl/hessel.meun/konix/konix-main.htm The next closest thing to an x86 console was the Tandy VIS, a machine Microsoft and Tandy created as a competitor to CD-i, as if CD-i represented a market worth grabbing at. It was OK for CD-ROM edutainment sorts of stuff but is essentially a historical footnote.)
Third, you made several assumptions about the swShader application. You assume, on the basis of no data, that it is the best that can be done on the platform in question. You also make assumptions about the equipment of the person who said he was getting 10 FPS, when he didn't offer any such information at all. There is no telling what he is using, and thus no basis for comparison to anything attempting the same task in software. Then, finally, you do something that goes beyond assumption; it is an outright mistake. The site for swShader states plainly that it emulates DX9-class shaders. If you've ever looked into the specs of DX9 chips, you'll see they're massively larger in transistor count than their DX8 predecessors. This is because a 2.0 or higher shader is much, much more complex than a DX8 shader. It uses floating point at high accuracy end to end (though Nvidia and ATI have differed in minimum accuracy requirements), supports a much fuller instruction set, and supports massively larger shader programs. Producing a DX9 shader in software involves almost two orders of magnitude more processing than producing a DX8 shader in software. In other words, you aren't comparing the GameCube to an Xbox; you're comparing it to the functionality of a GPU almost four years more advanced than the XGPU.
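A rough cost model shows why the generation gap matters so much in software. The program-length limits are real (ps_1_1 allows at most 8 arithmetic instructions, ps_2_0 allows 64 or more), but the float-emulation multiplier below is an assumed illustrative figure, not a measurement:

```python
# Back-of-envelope cost model for emulating pixel shaders on a CPU.
# Instruction limits are from the shader model specs; the float cost
# multiplier is an assumption for illustration only.
DX8_INSTRUCTIONS = 8    # ps_1_1: max 8 arithmetic instructions, low-precision
DX9_INSTRUCTIONS = 64   # ps_2_0: 64+ arithmetic instructions, full float
FLOAT_COST_MULTIPLIER = 4  # assumed extra CPU cost of end-to-end FP math

def per_pixel_cost(instructions, float_multiplier=1):
    """Relative cost of one pixel: instruction count times per-op cost."""
    return instructions * float_multiplier

dx8 = per_pixel_cost(DX8_INSTRUCTIONS)
dx9 = per_pixel_cost(DX9_INSTRUCTIONS, FLOAT_COST_MULTIPLIER)
print(dx9 / dx8)  # 32.0 under these assumed numbers
```

Even with these conservative made-up numbers the gap is over 30x per pixel, before accounting for the larger register files and texture instruction counts, so a 10 FPS figure for DX9 emulation says nothing about what DX8-class effects would cost in software.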
So if you look at the situation with some clarity, it becomes apparent the GameCube has a lot of power, but not everyone knows how to apply it fully or is willing to do the work.