fixed BFINS instruction when width+offset > 32 #87
base: master
Conversation
For reference, the MAME code for this block is:
https://github.com/mamedev/mame/blob/54442a7a5b4b69ea667a5ce1051d454bb5d22f43/src/devices/cpu/m68000/m68k_in.lst#L1786-L1792

if((width + offset) > 32) {
    mask_byte = MASK_OUT_ABOVE_8(mask_base) << (8-offset);
    insert_byte = MASK_OUT_ABOVE_8(insert_base) << (8-offset);
    data_byte = m68ki_read_8(ea+4);
    m_not_z_flag |= (insert_byte & mask_byte);
    m68ki_write_8(ea+4, (data_byte & ~mask_byte) | insert_byte);
}

Should that mask_byte line also be updated?

mask_byte = MASK_OUT_ABOVE_8(mask_base) << (8-offset);
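For what it's worth, here is a quick sketch of why the shift matters for the mask line too (illustrative values only, not the actual Musashi diff): with offset = 4 and width = 32, only the top four bits of the byte at ea+4 belong to the bit field, so an unshifted mask would also clear the byte's low nibble.

#include <stdio.h>

/* Local stand-in for the emulator macro, for illustration only. */
#define MASK_OUT_ABOVE_8(x) ((x) & 0xffu)

int main(void)
{
    unsigned offset    = 4;
    unsigned mask_base = 0xffffffffu;  /* the width = 32 case */

    unsigned without_shift = MASK_OUT_ABOVE_8(mask_base);                                   /* ff */
    unsigned with_shift    = MASK_OUT_ABOVE_8(MASK_OUT_ABOVE_8(mask_base) << (8 - offset)); /* f0 */

    /* In data_byte & ~mask_byte, an ff mask would wipe the low nibble of the
     * byte at ea+4 even though it lies outside the bit field; f0 preserves it. */
    printf("unshifted mask: %02x, shifted mask: %02x\n", without_shift, with_shift);
    return 0;
}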
Hi,
Good find. I think you're right. It's not a problem in the code I'm running because of the context I'm in, but it's definitely something to consider.
I'm going to compare more code between MAME and Musashi. Those selfish MAME bad boys didn't port bugfixes back into Musashi, even though Musashi was the base of MAME's 68k machines in the first place.
I guess the merge with MESS helped MAME improve the 68020+ instruction set by emulating Amigas and Macintoshes, something Musashi could not do by itself unless exposed to a lot of code.
I'm going to compare the code more widely if I can, because there may be more corrections. I'm pretty sure the thing you mentioned is one of them, and other bitfield instructions have the same issue; I'm sure of that from checking the relevant WinUAE code.
Regards
That sounds like a good plan. You will probably find some other fixes in the MAME code that could be helpful. Yes, it is too bad some of the improvements in MAME were never backported to Musashi. At this point there seems to be a significant divergence in how some of the code is written between the two, but something like these bitfield fixes could have been backported fairly easily.
I definitely found more fixes in the MAME code, and also found bugs in the MAME code that I had fixed in Musashi earlier :) (the TRAPT/TRAPCC instructions, which are rarely used, I admit).
I have started backporting fixes in the scalar section (bitfield instructions), and I also noticed mainly enhancements in the FPU section, integrating our work on Musashi (mamedev/mame@27ad7de), but someone went even further and added a ton of other missing addressing modes, so I'm going to backport those to Musashi as well.
Maybe writing a Python script to "deface" MAME code and refactor it back to Musashi base types & macros (and back) would be useful, at least to be able to compare the sources and merge them back and forth easily.
I have made a Python script to change the MAME base types to the ones Musashi uses. It makes diffs much clearer and merges much easier. And there are a lot of fixes indeed, in the pack/unpack instructions too, and a lot of FPU stuff.
Put the MAME CPU files (m68kfpu.cpp and m68k_in.lst) in the "in" folder and get the reworked files in the "out" folder. There are still a lot of differences in structure (C vs C++, .lst vs .c), but the code is identical where it's possible.
[MAME2musashi.zip](https://github.com/kstenerud/Musashi/files/8335383/MAME2musashi.zip)
This is great work that you are doing, and it would be a real asset to get it moved back into Musashi!
This is a corner case of the BFINS instruction when width + offset > 32 (an encoded width of 0 means 32, so that counts too).
To test:
D0 = $275
A0 = some address
BFINS D0,(a0){4:0} => the longword at (A0) holds X0000027, the longword at (A0)+4 holds 5XXXXXXX
Without the correction, the second longword is incorrect because the shifting isn't taken into account, and we get:
(A0) holds X0000027, (A0)+4 holds 75XXXXXX
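To make the arithmetic concrete, here is a small standalone sketch (illustrative only, not the actual Musashi source) that reproduces the values above. It assumes the usual left-justified mask_base/insert_base computation; the MASK_OUT_ABOVE_* macros are stand-ins defined locally.

#include <stdio.h>

#define MASK_OUT_ABOVE_8(x)  ((x) & 0xffu)
#define MASK_OUT_ABOVE_32(x) ((x) & 0xffffffffu)

int main(void)
{
    unsigned offset = 4, width = 32;   /* {4:0} -> encoded width 0 means 32 */
    unsigned insert = 0x275;           /* D0 */

    /* Field mask and value, left-justified in 32 bits (assumed layout). */
    unsigned mask_base   = MASK_OUT_ABOVE_32(0xffffffffu << (32 - width)); /* ffffffff */
    unsigned insert_base = MASK_OUT_ABOVE_32(insert << (32 - width));      /* 00000275 */

    /* First longword at (A0): the field occupies bits offset..31 of it. */
    printf("long insert = %08x\n", insert_base >> offset);  /* 00000027 -> X0000027 */

    if ((width + offset) > 32) {
        /* Spill byte at (A0)+4: only its top (width + offset - 32) bits
         * belong to the field, hence the << (8-offset) shift on both lines. */
        unsigned insert_byte = MASK_OUT_ABOVE_8(insert_base) << (8 - offset);
        unsigned mask_byte   = MASK_OUT_ABOVE_8(mask_base)   << (8 - offset);
        printf("byte insert = %02x\n", MASK_OUT_ABOVE_8(insert_byte)); /* 50 -> the 5X byte */
        printf("byte mask   = %02x\n", MASK_OUT_ABOVE_8(mask_byte));   /* f0 */
        /* Without the shifts, insert_byte would be 75 and mask_byte ff,
         * which is exactly the incorrect 75XXXXXX result above. */
    }
    return 0;
}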
The UAE core does that shifting; Musashi did not (see the relevant WinUAE bitfield code).
We had this issue in our application, and this change fixed the functional behaviour.
The BFTST, BFSET, and BFCLR instructions probably have the same issue, but we're not using them in our application with width + offset > 32, so we didn't take the time to work on a fix.