One or two models have increased in accuracy. Meanwhile all the grifters have caught on and there’s 1000x more AI companies out there that are just reselling ChatGPT with some new paint.
A Raspberry Pi should be fine for direct play, but it doesn’t really have the processing power to transcode. Check to see which mode you’re in.
If you want the ability to live transcode, you’d probably have better luck with an old laptop or PC with a dedicated GPU. Even the lowest-end cards in a given generation have the same video-encoding hardware as the high-end ones; I use a GTX 1050.
Who else downloaded LimeWire Pro using LimeWire?
This is measuring desktop market share only. You can click through to see all platforms, where Windows seems to have been losing most of its market share to Android over the last few months.
All I see now is blonde, brunette, redhead.
In theory it makes it possible for other games to use the same items to make stuff in their games (though I doubt this happens in practice).
I’ve heard this before, but there’s literally nothing preventing games from setting up some shared items on their own without NFTs. Nobody does it because companies want to keep their IP, and worrying about external items would be a nightmare to balance.
NFTs solve like 1% of the problem of sharing items. So much more goes into making them actually work. For example: NFT id 5551337 is owned by the player: now what? How do you figure out what 3d model to render? What actions can you perform? How does it integrate with other systems? All of that is going to have to be custom for every game involved on a per-item basis.
Ah yes… several years ago now I was working on a tool called Toxiproxy that (among other things) could slice up the stream chunks into many random small pieces before forwarding them along. It turned out to be very useful for testing applications for this kind of bug.
And that’s where Release with debug symbols comes in. Definitely harder to track down what’s going on when it skips 10 lines of code in one step though.
Usually my code ends up the other way though, because debug mode has extra assertions to catch things like uninitialized memory or use-after-free (I believe MSVC’s debug heap specifically fills freshly allocated memory with 0xCD and freed memory with 0xDD).
That’s definitely a non-trivial amount of data. Storage fast enough to read/write that isn’t cheap either, so it makes perfect sense you’d want to process it and narrow it down to a smaller subset of data ASAP. The physics of it is way over my head, but I at least understand the challenge of dealing with that much data.
Thanks for the read!
Neat, thanks for sharing. Reminds me of old mainframe computers where students and researchers had to apply for processing time. Large data analysis definitely makes sense for C++, and it’s pretty low risk. Presumably you’d be able to go back and reprocess stuff if something went wrong? Or is it more of a live feed that’s not practical to store?
It really depends what you’re doing. The last big project I did with C++ templates was using them to make a lot of compile-time guarantees about concurrency locks so they don’t need to be checked at runtime (thus trading my development time for faster performance). I was able to hide the majority of the templates from users of the library, and spent extra time writing custom static_assert messages.
C++ templates are in fact a compile-time turing complete language, as crazy as that sounds.
Yep, sadly I’ve been exposed to a few such codebases before. I certainly learned a lot about how NOT to design a project.
You’ve been at it longer than I have, but I’ve already had coworkers look at me like I’m a wizard for decoding their error message. You do get a feel for where the important parts of the error actually are over time. So much scrolling though…
Yep, I learned about this exact case when I got my engineering degree.
I guess you’ve never seen some of the 10-page template errors C++ compilers will generate. I don’t think anything prepares you for that.
I’m not sure how I feel about someone controlling an X-ray machine with C++ when they haven’t used the language before… At least it’s not for use on humans.
I understand what you mean. Water vapour (i.e. clouds, fog, the visible part of what comes from boiling water which any normal person would call steam) vs Gaseous water (i.e. most of the atmosphere, and the non-visible part of boiling water also called steam).
Vapes work by boiling PG/VG which starts as a liquid (i.e. the juice), and generates both vapourized and gaseous PG/VG. If it was water, any normal person would consider this steam. This isn’t a chemistry or physics class.
I don’t think there’s a need to be so pedantic here. Water vapor is the visible part of steam, and for the purposes of this discussion, we’re talking about boiling liquids, so I don’t think there was any miscommunication in using the word “steam”.
Our data suggest that the flavorings used in e-juices can trigger an inflammatory response in monocytes, mediated by ROS production, providing insights into potential pulmonary toxicity and tissue damage in e-cigarette users.
Well, I guess that’s a point against flavored vapes. I really wish there were more studies, because presumably not all flavorings would have the same effect. A comparison with unflavored e-juice would have been great.
That’s certainly a problem. It’s one of the big reasons I think THC vapes should be both legal and regulated. In the states where it is legal, there’s strict inventory tracking every step of the way.
Admittedly it’s a lot harder to get people on board with regulating drug-free vapes, but I think it would be a good idea to have guarantees about what you’re consuming just like food.
I have absolutely no clue what my highschool locker combination is, but I guarantee you if you handed me the lock, I could open it first or second try. That muscle memory is burned deep into my hands, and it’s been over 10 years.