

Start working on Vita emulation, you cowards.
Oh, let’s not have the Masto instance chat here. Sure, the UX for onboarding is terrible and the community’s obsession with what is ultimately a trivial concern is a problem, but that’s not a problem in search of a technical solution. Masto would have stood a better chance if it had just defaulted to Mastodon.social, because end users shouldn’t have to know or care what instance they are using on first contact. The only reason fedi advocates obsess about this to the point of borking the most important bit of social media UX is the fiction that all instances “deserve” the same level of discoverability for some reason.
Also, invite-only instances are already a thing, at least on Masto, and as far as I can tell nothing keeps you from making a new federated app that requires invites, so this feels like a bit of a non-issue anyway.
Well, for one thing, it’s part of a wider trend of misreporting about AI. For another, the more interesting, meaningful angle here would be why the (frankly very simplistic) research of the BBC is mismatched with the supposedly more rigorous benchmarks used for LLM quality testing and reported in new releases.
In fact, are they? What do they mean? Should people learn about them and understand them before engaging? Probably, yeah, right? But the BBC is saying their findings have “far reaching implications” without engaging with any of those issues, which are not particularly obscure or unknown in the field.
The gap between what’s being done in LLM development, what is being reported about it and how the public at large understands it is bizarre and hard to quantify. I believe once the smoke clears people will have some guilt to process about it, regardless of what the outcome of the hype cycle ends up being.
Wow, what sort of advanced techniques of investigative journalism did they deploy? Use the thing for five minutes and count?
I’m not even a big hater of LLMs and I could have told you that for free.
I rest my case, I suppose.
I’m definitely over being emotionally invested in the consequences of their entitlement within the US. I, unfortunately, like the rest of the world, don’t get to be over the consequences elsewhere.
They are so mad that Democrats weren’t a sufficiently exciting alternative to an outright fascist entente. Livid, they are.
Ah, so you meant DLSS to mean specifically “DLSS Frame Generation”. I agree that the fact that both upscaling and frame gen share the same brand name is confusing, but when I hear DLSS I typically think upscaling (which would actually improve your latency, all else being equal).
Frame gen is only useful in specific use cases, and I agree that when measuring performance you shouldn’t do so with it on by default, particularly for anything below 100-ish fps. It certainly doesn’t make a 5070 run like a 5090, no matter how many intermediate frames you generate.
But again, you keep going off on these conspiracy tangents on things that don’t need a conspiracy to suck. Nvidia isn’t keeping VRAM artificially low as a ploy to keep people from running LLMs, they’re keeping VRAM low for cost cutting. You can run chatbots just fine on 16, let alone on 24 or 32 gigs for the halo tier cards, and there are (rather slow) ways around hard VRAM limits for larger models these days.
You don’t need some weird conspiracy to keep local AI away from the masses. They just… want money and have people that will pay them more for all that fast RAM elsewhere while the gaming bros will still shell out cash for the gaming GPUs with the lower RAM. Reality isn’t any better than your take on it, it’s just… more straightforward and boring.
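To put a rough shape on those “slow ways around hard VRAM limits”: the usual trick is partial offloading, keeping as many model layers as fit in VRAM and running the rest from system RAM. A back-of-envelope sketch of the arithmetic (layer counts and per-layer sizes here are made-up illustrative numbers, not real model specs):

```typescript
// Partial offloading in a nutshell: fit as many transformer layers as the
// card's VRAM budget allows, spill the rest to (much slower) system RAM.
interface OffloadPlan {
  gpuLayers: number; // layers served from VRAM at full speed
  cpuLayers: number; // layers served from system RAM, slowly
}

function planOffload(
  totalLayers: number, // total layer count of the model (illustrative)
  gibPerLayer: number, // rough VRAM cost per layer in GiB (illustrative)
  vramGib: number      // usable VRAM budget in GiB
): OffloadPlan {
  const gpuLayers = Math.min(totalLayers, Math.floor(vramGib / gibPerLayer));
  return { gpuLayers, cpuLayers: totalLayers - gpuLayers };
}

// A hypothetical 80-layer model at ~0.45 GiB per layer on a 16 GB card:
// { gpuLayers: 35, cpuLayers: 45 } — it runs, just nowhere near full speed.
console.log(planOffload(80, 0.45, 16));
```

That’s the whole “workaround”: it trades speed for capacity, which is exactly why it doesn’t threaten sales of the fast, big-memory datacenter parts.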
That is a rather astonishing mix of really granular quoting of more or less accurate facts and borderline conspiracy theorist level misinformation. You rarely see this stuff outside political channels, I’m… mildly impressed.
AMD absolutely does have stock in back rooms, largely because they have been doing a somewhat undignified dance of waiting to see what Nvidia does to decide what they’re pricing their current gen at. Most educated guesses out there are that they were going to price higher, were caught on the wrong foot with Nvidia’s MSRP announcement and had to work out how to re-price cards that were already in the retail channel. And now Nvidia is in turn delaying the 5070 to interfere with AMD’s new dates. Because both of these companies suck.
On the plus side for consumers, there’s some hope that the 9070 will be repriced somewhat affordably and that it won’t underperform against at least the 5070, if not the 5070 Ti. We’ll see what reviews have to say about it.
Your summary of why the launch was so light includes some real stuff (yeah, partners struggle to match Nvidia’s aggressive pricing and have terrible margins), but that’s not why there was no stock of the 5090 (most reports suggest the GPUs were simply not being manufactured early enough to provide chips to anybody). 5080s were both more readily available and less appealing, so they’re easier to find, which kinda pokes big holes in that hypothesis. Manufacturing timelines seem to also explain why restocking will be slow.
I’m also very confused about why you’d “turn off DLSS”. Are you allowing people to use FSR, at least? That’s a weird proviso. The reason they would misrepresent the impact of MFG is obviously good old marketing. Even if AMD didn’t exist, the 40 series does and they have a big issue with justifying a lot of the 50 series line against it. With the 5080 falling well behind the 4090 they have a clear incentive for suggesting you can match the 4090 in cheaper cards. This doesn’t tell you anything about the performance of the 9070 one way or the other. It does tell you a lot about the performance of the 5080, though.
See, this is why this sort of propagandistic speech works so well: it takes forever to even cover all the misrepresentations, and all this is going to do is get you to double down on some of these unsubstantiated statements and turn it into a “matter of opinion”. It doesn’t even need to be on purpose, it’s just easier to produce than to counter.
Aaaand now I made myself sad.
In any case, here’s hoping the 9070 is a competitive option and readily available. They’ve apparently scheduled that delayed event for the 28th, so I’ll be curious to see what they bring to the table officially.
Oh, they’re absolutely not retaking a huge chunk of the dedicated GPU market. I think what’s realistic to expect if they have a good launch (readily available stock, competitive performance and price) is that they may regain a couple points of desktop install base and at least get to sell that they’re moving in the right direction instead of abandoning that space altogether. Maybe some growth on handhelds and competitive iGPUs for laptops and tablets, so that it at least makes sense for them to keep developing the gaming GPU business aggressively.
With the 5070 at a 550 MSRP I wouldn’t be surprised to see AMD matching that for similar performance. Given all the delay shenanigans it’d be shocking for them to deliberately wait for the 5070 info and then launch with a more expensive part.
How much you end up having to pay to get one is anybody’s guess, of course, as MSRP is increasingly meaningless. Since they’ve had cards with retailers for a while and have been delaying there may actually be some stock at launch, though. We’ll see.
The idea that it would “smoke the 5070” and “nearly match the 5080” is probably just fanboyism, or they wouldn’t have ducked out from directly pitching it after the 5070 reveal (and if they had a 500 dollar 5080 competitor they wouldn’t be cancelling their high end cards this gen).
In any case, it’s immensely dumb to fanboy for multibillion dollar chip manufacturers. I just hope people can buy good, affordable GPUs from multiple manufacturers at some point. I own GPUs from Intel, AMD and Nvidia and would really want them all to remain competitive in as many pricing segments as possible.
There are a couple of different things here. The 50 series launch was a bit of a paper launch, especially for the 5090. Scalping obviously happened, but the issue seems to have been very few cards being available, not so much high demand.
A different question is what the things that are available are worth and how they’re selling. It’s not impossible to find popular parts, but finding popular parts at MSRP is hard, with crazy markups changing day-to-day. I bought a CPU last year at MSRP and despite being a last-gen part that has since received a direct replacement, today it’s 100 bucks more expensive from the same retailer.
It’s not just an issue with location. Canadians tend to think they’re a lot more… culturally and politically European than they are.
And, again, there are lots of other alternatives before having to incorporate a whole-ass North American country with a landmass twice as big as the entire EU and located ten time zones away into a political and economic union designed to let trucks move things around easily.
Well, we’d have to redraw a bunch of maps, so at least it’d one up the dumb Gulf of Mexico distraction.
This seems pretty silly. There are tons of intermediary states Canada could reach without the weird torturing of geography. As the linked piece acknowledges way at the bottom, incidentally.
Yeah, but that’s solved through cross-login, which I’ve already seen used at least once in Pixelfed. Logging in with a pre-existing Masto account and importing your follows should have been the default solution, but I understand how the tech may not have been in place.
Yeah, but that’s bad, though.
Hypercustomization is way more of a hassle than a positive in most applications. I’ll take a couple of binary settings, but I won’t design the UI for you.
My contention here is that the default UI for the *Bin is actually good.
You made me go check, and the signed-out site on an incognito tab does autoselect my browser-default dark theme. It looks much better than the light, incidentally, and the highlight to the Fedi tutorial link makes more sense in this context and is clearly restricted to signed-out users as a call to action/promo thing.
I don’t necessarily think the light theme is as awful as you’re claiming, and at a glance it definitely seems to be derived from Dark and not the other way around. The more I look into it the less this seems like a universal problem with the UX in Mbin derivatives and more “the light theme has made some debatable color choices”.
Honestly, choosing whether to default to dark or light is pretty arbitrary, and pointless once the user sets a preference on login anyway. I’m not sure if there’s a reason you can’t default to OS/browser preference for a logged-out user, but also don’t think it’s a big deal. Plus highlighting a “what is this app” tile makes more sense on the logged-out default, so there’s that as well.
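For what it’s worth, picking up the OS/browser preference for signed-out visitors is nearly a one-liner on the web side. A minimal sketch, assuming the stylesheet keys off a data-theme attribute (that attribute name is my placeholder, not Mbin’s actual markup):

```typescript
// Default the theme to the visitor's OS/browser preference before any
// stored per-user setting exists (i.e., for signed-out users).
const prefersDark = window.matchMedia("(prefers-color-scheme: dark)");

function applyTheme(dark: boolean): void {
  // Assumes the CSS selects on [data-theme="dark"] / [data-theme="light"].
  document.documentElement.dataset.theme = dark ? "dark" : "light";
}

applyTheme(prefersDark.matches); // initial default for signed-out visitors
prefersDark.addEventListener("change", (e) => applyTheme(e.matches));
```

A saved account preference would simply override this after login, which is presumably why nobody has felt much urgency about the logged-out default.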
Which is not to say that you’re wrong on the larger point. FOSS devs having the attitude that the UI is a secondary concern, or wildly overestimating users’ willingness to deal with friction or bad looks, is an ongoing frustration. I guess engineers are more likely to attempt FOSS projects than UX designers.
I suppose that’s the point of interoperability. I would much rather support an ecosystem of apps doing the exact same thing to satisfy different UX preferences than the excruciating endless talk of “which of these identical instances all plugging into the same service should I arbitrarily join as an identity-defining statement” you get in Masto.
Hold on, that doesn’t seem like an apples to apples comparison. You’re doing light theme in one and dark in another. The light theme has a different balance (also, ow, my retinas).
The default Fedia dark theme I am using does not look like that at all. Sure, both the main column and the tool column on the right have the same emphasis, but you still get hierarchy from both the relative sizes and the positioning (if you’re a left-to-right reader, at least).
Like I said…