I think ARM is their end goal; it’s really the only option for a handheld console. Today, ARM is the only way to get a performance-per-watt ratio that gives you both good battery life and good enough performance.
Win-win for everyone if they invest in an open source x86-to-ARM project, similar to what they did with Wine.
The Switch is more than proof enough that pretty much any modern game engine can compile to an ARM target with zero issues (though Nvidia’s low level APIs help, not sure about Qualcomm).
But there’s zero chance older PC games will ever be updated, and by older I don’t mean ancient: some AAA studios stop issuing updates within about a year of release.
So it all comes down to being able to emulate x86 on ARM… The best example we have is Apple: games run, but with a massive performance hit. Microsoft’s implementation is borderline unusable. I’m not sure what to expect from Valve.
Check out Box86/Box64 and FEX-Emu. They both do x86 translation/emulation on ARM Linux, and the results are way better than any reasonable expectations I had going in.
A lot of that comes down to Unreal and Unity. They have targets built in for everything. Even a web browser if you want.
I wouldn’t say you get that much of a performance hit when emulating x86 games on Apple silicon. Rosetta works pretty great for games that already ship on macOS as x86 builds. The problem is Windows games, which are also x86 but need the Windows layer handled on top of that.
It really depends on the game. If the game is truly native, Rosetta performance is usually good. A lot of games, though, use a JIT, and running a JIT inside a JIT is terrible for performance. The good news is that a game already built on a JIT is probably easier to patch to be native: for example, people have had success replacing the Mono runtime used by Terraria with a native one and seeing good performance improvements, or running Minecraft on a native JVM. The bad news is that this doesn’t necessarily mean the developers will actually update the thing, and mods like this are unlikely to appeal to the vast majority of people.
Every year they are more likely to go RISC-V.
Nah, ARM is barely more efficient than x86. As soon as AMD went to TSMC 3nm, they got almost the same power efficiency as the Apple M chips.
Apple’s “magic sauce” is just being first onto the new TSMC nodes.
AMD is getting there by optimizing the shit out of memory access and cache. RISC designs by nature have far simpler memory models. AMD has to throw tons of resources into making the x86 pig stay in the air, and they’re already flirting with a move towards ARM.
Most of the people who know how to keep that pig flying already work at AMD or Intel. They certainly don’t work at VIA Technologies (the third x86 company that nobody talks about, for a good reason). In contrast, any given Fortune 500 could probably hire an ARM team to make a custom chip for their needs provided they had a good enough reason.
Bruh, just check: whenever Apple makes massive performance gains, it’s on a new TSMC node.
I’m not gonna bullshit you and say AMD and Intel do nothing; sure, they’ve got some amazing tricks. But in the end it’s mostly TSMC making everyone’s chips faster and more power efficient.
New Nvidia GPUs magically got power efficient. Why? Check the node of the 3000 series versus the 4000 series. AMD is currently way less power efficient in GPUs. Why? They’re not on the latest node like Nvidia.
What I’m getting at is that there are factors that affect the broader market. Having more people and companies able to work on processors means a greater possibility of variation, and that’s an evolutionary advantage.
There are three x86 companies, and there’s not likely to be any others. VIA is barely worth talking about. AMD is currently killing it, but it wasn’t always that way. Over a decade ago, a combination of bad decisions at AMD, good decisions at Intel, and underhanded tactics at Intel made AMD nearly collapse. Intel looked smug on its throne, and sat on the same fundamental architecture and manufacturing node for a long time.
This was a bad situation for the entire computer industry. We were very close to Intel being all that mattered, and that would have meant severe stagnation. ARM (and RISC-V) being more viable helps keep that from happening again.
While that’s partially true, the biggest problem is software compatibility. Most software is compiled and optimized for x86, and it won’t run on ARM unless it’s recompiled.
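As a small illustration of what “optimized for x86” can mean in practice, here is a hedged sketch: hand-tuned code often has SIMD paths written with x86-only intrinsics, so a straight recompile for ARM fails unless the project also carries a portable (or NEON) fallback. The function below is invented for illustration, not taken from any real codebase.

```c
/* Sketch of why x86-optimized code does not just recompile for ARM:
 * the fast path uses SSE2 intrinsics that only exist on x86, so an ARM
 * build has to fall back to plain C (or a separately written NEON port). */
#include <stddef.h>
#include <stdio.h>

#if defined(__x86_64__) || defined(_M_X64)
#include <emmintrin.h>  /* SSE2 */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];
}
#else
/* Generic fallback the ARM build relies on unless someone writes NEON code. */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
}
#endif

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    add_arrays(out, a, b, 8);
    printf("%f\n", out[0]);  /* prints 9.000000 on either path */
    return 0;
}
```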
How are we going to get everyone to jump ship? Apple had their magic emulation sauce, but Windows doesn’t seem to have that, especially not for RISC-V.
Much of what people do on computers these days is through a web browser. An even bigger market is servers, which often run Linux and can be ported to ARM with less hassle.
People put far too much weight on games.
deleted by creator
Not sure what point you’re trying to make. Translation is just one of the ways to emulate.
deleted by creator
This… is not exactly how it works.
The way the Windows ABI works, syscalls are always supposed to go through dynamic libraries first, while on Linux programs just issue the syscall instruction directly.* The way Windows syscalls work lets a project like Wine simply implement those libraries so that they perform Linux syscalls instead. No instructions get translated.
But with other architectures the story is different. You either build an instruction decoder into the processor, write an interpreter, or write a binary translator. The first is the Itanium way, the second is the naive way, and the third is how everyone does it. The third is basically compiling one machine code into another; it has the overhead of, well, compiling one machine code into another, and it works badly with other JIT compilers.
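To make the interpreter option above concrete, here is a toy fetch/decode/execute loop in C. The instruction set is invented purely for illustration; a real x86 interpreter has to cope with thousands of instruction forms. The per-instruction dispatch cost in this loop is exactly what a binary translator avoids by compiling each guest block to host machine code once and caching it.

```c
/* Toy interpreter: fetch, decode, and execute one guest instruction at a
 * time. A binary translator would instead emit host machine code for each
 * guest block and reuse it, which is faster but fights with guest JITs. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void) {
    /* Guest "program": r0 = 2; r1 = 3; r0 += r1; print r0; halt. */
    uint8_t code[] = { OP_LOADI, 0, 2, OP_LOADI, 1, 3,
                       OP_ADD, 0, 1, OP_PRINT, 0, OP_HALT };
    int64_t regs[4] = {0};
    size_t pc = 0;

    for (;;) {
        uint8_t op = code[pc++];              /* fetch */
        switch (op) {                         /* decode + execute */
        case OP_LOADI: { uint8_t r = code[pc++]; regs[r] = code[pc++]; break; }
        case OP_ADD:   { uint8_t d = code[pc++]; uint8_t s = code[pc++];
                         regs[d] += regs[s]; break; }
        case OP_PRINT: { uint8_t r = code[pc++];
                         printf("%lld\n", (long long)regs[r]); break; }
        case OP_HALT:  return 0;
        default:       fprintf(stderr, "bad opcode %u\n", op); return 1;
        }
    }
}
```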
*There is the vDSO, a dynamic library that implements a few syscalls such as getting the time, but it is completely optional.
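And to illustrate the “no instructions get translated” point at the start of this comment, here is a hypothetical, heavily simplified sketch of what one such replacement library function could look like. The name mirrors the Win32 style, but the signature and behavior are cut down for illustration; it is not Wine’s actual code, and on a different CPU architecture you would still need the binary translation step above on top of this.

```c
/* Sketch of the Wine idea: because Windows programs reach the kernel
 * through DLLs rather than raw syscall instructions, a compatibility layer
 * can supply its own build of those libraries and service the calls with
 * Linux syscalls. The guest's x86 code runs unmodified and simply lands
 * here via dynamic linking. */
#include <unistd.h>

typedef int BOOL;
typedef unsigned long DWORD;
typedef void *HANDLE;

/* Stand-in for a Win32-style file-write API (simplified, not Wine's code). */
BOOL WriteFile(HANDLE file, const void *buf, DWORD len, DWORD *written) {
    ssize_t n = write((int)(long)file, buf, len);   /* plain Linux syscall */
    if (n < 0)
        return 0;                                   /* FALSE */
    if (written)
        *written = (DWORD)n;
    return 1;                                       /* TRUE */
}

int main(void) {
    DWORD n = 0;
    const char msg[] = "hello from the shim\n";
    /* Pretend handle 1 is stdout, the way a loader would map standard handles. */
    return WriteFile((HANDLE)1, msg, sizeof msg - 1, &n) ? 0 : 1;
}
```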
One thing you can do is translate 3D APIs. That sometimes makes 3D consoles easier to emulate than 2D consoles: PS1 emulation was basically solved back when SNES emulation was playable but still had noticeable bugs.
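A rough sketch of what “translating the 3D API” means here: instead of simulating the console GPU’s registers and rasterizer, the emulator catches high-level draw commands and reissues them through the host’s graphics API. Everything below is hypothetical (the guest commands and the host_* helpers are invented for illustration; real emulators target OpenGL, Vulkan, and so on).

```c
/* High-level emulation sketch: one guest GPU command becomes one host
 * graphics API call, instead of emulating the GPU hardware itself. */
#include <stdint.h>

enum guest_cmd { CMD_CLEAR, CMD_DRAW_TRIANGLE };

struct guest_packet {
    enum guest_cmd cmd;
    float verts[9];       /* three xyz vertices for CMD_DRAW_TRIANGLE */
    uint32_t color;       /* packed RGBA for CMD_CLEAR */
};

/* Stubs standing in for the host renderer (imagine thin wrappers over
 * OpenGL or Vulkan). */
static void host_clear(uint32_t rgba) { (void)rgba; }
static void host_draw_triangle(const float *xyz9) { (void)xyz9; }

/* Translate one guest packet into the corresponding host API call. */
static void translate_packet(const struct guest_packet *p) {
    switch (p->cmd) {
    case CMD_CLEAR:         host_clear(p->color);         break;
    case CMD_DRAW_TRIANGLE: host_draw_triangle(p->verts); break;
    }
}

int main(void) {
    struct guest_packet clear = { .cmd = CMD_CLEAR, .color = 0x000000ffu };
    translate_packet(&clear);
    return 0;
}
```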
deleted by creator