I think version 2.9 had a linker bug. crt0 files were then written that look crazy but work properly on that version.
and then I put this in my batch files:
sdcc -v > compilerversion.txt
Then, even years later, one can see which compiler version was used.
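For example, a build batch file could look like this (a minimal sketch; the file names and SDCC options are placeholders, not anyone's actual build script):

rem record the exact compiler version next to the build artifacts
sdcc -v > compilerversion.txt
rem build as usual (example options and names)
sdcc -mz80 --code-loc 0x0100 main.c
hex2bin main.ihx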
I found a strange thing... With the assembler REL files already compiled, I run SDCC to compile the C program, and then hex2bin to convert it to binary. With no changes to the C source and no changes to the REL files, hex2bin displays different values, for example:
Lowest address = 00000100 Highest address = 00001AC2 Pad Byte = FF 8-bit Checksum = 6E
Lowest address = 00000100 Highest address = 00001AC6 Pad Byte = FF 8-bit Checksum = D5
Lowest address = 00000100 Highest address = 00001ACF Pad Byte = FF 8-bit Checksum = 8D
How can it be? I would expect the same sources and object files to always produce the same executable, both in size and in contents.
I have upgraded to SDCC 3.8.0 #10562 (MINGW64) from the version provided by Eric in the archive.
SDCC 3.8 introduces new bugs. One I have already run into: z80instructionSize() failed to parse '.dw 0x1243'
see here: https://sourceforge.net/p/sdcc/bugs/2830/
I still recommend 3.6, also because 3.6 produces smaller code than 3.8.
Reverted back to your version. Same issue: different sizes after compilation and linking. I did not look into how the end product (the executable produced by the compiler/linker) is affected; hopefully the issue does not affect the operability of the code.
I agree with Eugeny. I've seen that happen too!
By the way, is it possible to enable SDASZ80 to understand unofficial Z80 opcodes?
You can write macros to "simulate" some of these instructions.
Sure.
That's exactly what I've been doing!
Thanks!
By the way, is it possible to enable SDASZ80 to understand unofficial Z80 opcodes?
I use .db.
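For reference, a minimal sketch of both approaches (assuming a sdasz80 build that supports the ASxxxx .macro/.endm directives; the opcode bytes are the well-known undocumented encodings):

        ; wrap the raw bytes in macros so they read like instructions
        .macro  sll_a           ; undocumented SLL A: shift left, bit 0 := 1
        .db     0xCB, 0x37
        .endm

        .macro  ld_a_ixh        ; undocumented LD A,IXH
        .db     0xDD, 0x7C
        .endm

        ; usage:
        sll_a
        ld_a_ixh

If the assembler lacks macro support, the bare .db lines alone still work.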
I have the following code:
int w=((int)(*(char*)0xf3b0))/4;
and it translates to
;gr8net-c.c:304: int w=((int)(*(char*)0xf3b0))/4;
        ld      hl,#0xf3b0
        ld      c,(hl)
        ld      b,#0x00
        ld      -2 (ix),c
        ld      -1 (ix),b
        bit     7, b
        jr      Z,00123$
        inc     bc
        inc     bc
        inc     bc
        ld      -2 (ix),c
        ld      -1 (ix),b
00123$:
        ld      a,-2 (ix)
        ld      -4 (ix),a
        ld      a,-1 (ix)
        ld      -3 (ix),a
        sra     -3 (ix)
        rr      -4 (ix)
        sra     -3 (ix)
        rr      -4 (ix)
The compiler is stupid enough to check the sign bit of a value that is always positive, adding 13 extra bytes to the program.
Also, I found out that defining local variables is a bad idea: they live on the stack and are accessed through IX, slowing execution down and growing the size of the program.
Using global variables is somewhat better, but still not ideal:
;gr8net-c.c:310: w=((int)(*(char*)0xf3b0))/4;
        ld      hl,#0xf3b0
        ld      l,(hl)
        ld      h,#0x00
        ld      c,l
        ld      e,h
        bit     7, h
        jr      Z,00123$
        inc     hl
        inc     hl
        inc     hl
        ld      c,l
        ld      e,h
00123$:
        ld      iy,#_w
        ld      0 (iy),c
        ld      1 (iy),e
        sra     1 (iy)
        rr      0 (iy)
        sra     1 (iy)
        rr      0 (iy)
This one looks much better:
;gr8net-c.c:310: w=(int)((*(char*)0xf3b0)>>2);
        ld      hl,#0xf3b0
        ld      c,(hl)
        srl     c
        srl     c
        ld      iy,#_w
        ld      0 (iy),c
        ld      1 (iy),#0x00
but it is still not clear why it uses IY when it could be as simple as LD B,#0 / LD (#_w),BC.
I would implement it as
        ld      a,(#0xf3b0)
        srl     a
        srl     a
        ld      l,a
        ld      h,#0
        ld      (#_w),hl
That's why I prefer assembly...
That code with the sign-bit test and those 3 INCs is funny; it must be about keeping the result consistent with division of negative numbers. In C, integer division truncates toward zero, while an arithmetic right shift rounds toward negative infinity, so for negative values the compiler has to add 3 (the divisor minus one) before shifting. So a division can't be directly replaced by a shift.
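A quick C illustration of the difference (note that right-shifting a negative value is implementation-defined in C; the listings above show SDCC using SRA, i.e. an arithmetic shift):

#include <stdio.h>

int main(void) {
    signed char v = -1;
    printf("%d\n", v / 4);        /* 0  : division truncates toward zero    */
    printf("%d\n", v >> 2);       /* -1 : arithmetic shift rounds downward  */
    printf("%d\n", (v + 3) >> 2); /* 0  : the compiler's 3 INCs fix this up */
    return 0;
}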
And: first shift the byte, then widen it to a word afterwards:
int w; w=*(unsigned char*)0xf3b0>>2;
;app.c:239: int test() {
;       ---------------------------------
; Function test
;       ---------------------------------
_test::
;app.c:241: w=*(unsigned char*)0xf3b0>>2;
        ld      hl, #0xf3b0
        ld      l, (hl)
        srl     l
        srl     l
        ld      h, #0x00
;app.c:242: return w;
;app.c:243: }
        ret
In C, a plain char may be signed or may be unsigned; one doesn't know. That's why the cast to unsigned char matters here.
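For example (a minimal sketch; SDCC defaults to a signed plain char, and its --funsigned-char option flips that):

#include <stdio.h>

int main(void) {
    char c = (char)0xF0;                  /* -16 if plain char is signed  */
    printf("%d\n", c / 4);                /* -4 with a signed plain char  */
    printf("%d\n", (unsigned char)c / 4); /* 60 regardless of signedness  */
    return 0;
}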