Edit: there are some good points raised here:
https://www.quora.com/Why-did-C-replace ... -mid-1990s
It is kind of worth noting too that up to the 16-bit machines it was still possible for a human programmer to write better code than a Compiler, and that kind of tapered off when pipelining took over as the more dominant feature of CPU design. So until about the 68020 it was still possible to "outsmart" the Compiler if you were a fairly experienced Assembler developer. However, you barely stand a chance with the processors from the mid-90s onwards. I once worked at a company in Cambridge that bought another company which specialised in writing highly optimised Codecs, and they had a real problem most of the time hand-rolling the code on the more powerful PPC and ARM processors. It ended up being a sort of "see what the Compiler does, then correct it if you need to" modus operandi, and it often took them a long time to get it right. Back in the day on the Amiga/Atari/etc. this was not the case.
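If you want to try that "see what the Compiler does" workflow on a 68K target yourself, here is a minimal sketch (the file and function names are my own invention, and it assumes you have an m68k GCC cross-compiler such as m68k-elf-gcc on your path):

Code:
/* scale.c - a trivial routine for inspecting GCC's 68K output.
 * Hypothetical example: names and values are illustrative only. */
unsigned short scale(unsigned short x)
{
    return (unsigned short)((x << 2) + x);  /* x * 5 without a MULU */
}

/* Emit the generated Assembly instead of an object file:
 *   m68k-elf-gcc -S -O2 -m68000 scale.c -o scale.s
 * Read scale.s, then hand-correct only the parts where the
 * Compiler's output disappoints you. */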
As for GCC - and although there are other C Compilers around for general 68K targets, GCC would be the most actively developed and dominant one now - you have to ask yourself where the need would be to further improve the 68K back-end within GCC. How many new projects have a 68K core, and is it really worth expending the development effort on improving things for 68K targets? I very much doubt it, because I know that over the last 13 years we have seen a lot of work done for ARM and other RISC targets, so you could surmise that the future is RISC and the past is very much CISC - which means the 68K. We should probably look into benchmarks.

I tend to see more discussion about tools, Compilers and Assemblers on the Amiga forums, to be honest, than on here, but I can tell you that very few Amiga developers write in C and most write in Assembly - what do they know that Sprites Mind doesn't?
Stef wrote:
Definitely you can already do *a lot* with C.
You can, but can you do a lot on the MD with C? It boils down to what you are trying to achieve, I feel.
Stef wrote:
Writing 100% assembly code is imo just a waste of time as only small portions of your code (the bottlenecks) will probably require assembly optimizations.
Well, this again boils down to design: if I design my code in Assembly to carry out certain actions fast, then I know that *way* ahead of time, long before it becomes the kind of bottleneck you may find in your C code later on. I optimise *at the Assembly level*, but then I have perhaps used 68K and C longer than many of you.
Stef wrote:
The first step in optimization is the implementation itself (using adapted structures and algos), then you can optimize your C code to help generating better code and in final step if you need more speed you can pass in assembly for the part which require it.
No, the first step is to design correctly and to foresee, early on, the kinds of trade-offs you are willing to accept.
What neither you nor cero have mentioned is that the insight the Assembly developer has into how the MD hardware functions is what gives them the ability to determine how well things are executing - down to the individual machine cycle. I am quite sure that Yuji Naka was very aware of machine cycles, and indeed he worked out a suitable design prior to implementation, but then, he was an experienced professional who had enough low-level experience at the time he wrote Sonic the Hedgehog.
It is also worth noting that with poorly designed code you will spend more than 50% of your time debugging, and hardware-related issues will not be as easy to 'see' at the C level; I find, at least, that they stand out in Assembly - we may differ.
Stef wrote:
But even using the best ASM optimized code compared to good C code you will obtain a gain of 30% up to 100% for best case.
Well, when we work at the Assembly level and we work on our design - our solution - we are optimising our code structure and solution for the problem that presents itself. We are aware of what the hardware is doing, we know what we want to achieve, and things are pretty much straightforward when we look at Assembly language. I am not sure how much hardware and low-level work you have done yourself, Stef, but I can tell you that Compilers and even hardware (these days, with all the FPGAs and CPLDs and other hybrids around) "lie", and you will hardly ever see that manifest itself at the 'C level'. As such, for performance and design criteria, and if you ever need to reverse-engineer any commercial code or even a Demo, you are better off spending the time learning 68K and Z80 in the MD's case.
We also need to factor in code size, and this boils down to how you value ROM size: in the 80s and 90s memory was costly, so you tend to find that most MD games used compression schemes throughout, with 'just in time' data being decompressed and piped out during execution.
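To illustrate the 'just in time' idea, here is a minimal sketch of a byte-wise run-length decoder (my own illustrative code, not taken from any actual MD title): the compressed pairs stay in ROM and are only expanded at the moment the data is needed.

Code:
#include <stddef.h>

typedef unsigned char u8;

/* Decode run-length pairs of (count, value) from src into dst.
 * Returns the number of bytes written. The data sits compressed
 * in ROM and is expanded into RAM only when actually required. */
size_t rle_decode(const u8 *src, size_t pairs, u8 *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < pairs; i++) {
        u8 count = src[2 * i];      /* run length */
        u8 value = src[2 * i + 1];  /* byte to repeat */
        while (count--)
            dst[out++] = value;
    }
    return out;
}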
Have you worked out how much space you save writing in pure 68K/Z80 compared to writing in pure C? The size of the ROM image is still an important factor, though not as important as it was back when the MD was still seeing commercial releases.
But I agree, SGDK and C in general are good enough for beginners and for people who have no experience of the MD hardware, as they give them a 'first start' fairly quickly, but it pales compared to Assembly on low-end hardware - am I in the minority in that I count machine cycles? You will never see a div/mul/etc. in my code, as they cost far too much, but how many of you chaps use those opcodes implicitly in your C code?
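To make that concrete, here is a sketch of my own (illustrative names; the cycle figures are for the original 68000, where a MULU costs up to around 70 cycles and a DIVU up to around 140):

Code:
/* Division by a non-constant forces a real DIVU on the 68000:
 * up to ~140 cycles, against a handful for a shift.
 * (Assumes n != 0.) */
unsigned short average(unsigned short total, unsigned short n)
{
    return total / n;       /* the Compiler has to emit a DIVU here */
}

/* The Assembly-minded alternative: arrange the design so the
 * divisor is a power of two and a cheap shift will do. */
unsigned short average8(unsigned short total)
{
    return total >> 3;      /* average of 8 samples: a single LSR */
}

A good compiler will turn a division by a *constant* power of two into a shift for you, but a division by a variable, or an innocent-looking index into an array of oddly-sized structs, can quietly cost you a DIVU or MULU.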
Cheers,
Minty.