Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 1 Post
  • 2.79K Comments
Joined 1 year ago
Cake day: March 3rd, 2024


  • Possible; measuring the orbit will help determine that likelihood. The article gives a few other formation possibilities as well, and finding a few more systems like this will help narrow down what exactly happened here. It doesn't seem as impossible to me as the title implies: while the star is low-mass for a star, it's still a large mass, and the planet isn't that huge (about 50% less mass than Saturn, despite being a bit larger in size; see the rough numbers at the end of this comment).

    This just sounds like an extension of our understanding of how things are in the universe, similar to pre-Voyager expectations of what we'd find at our own system's planets and moons. What we actually found was that each place was unique, with its own fascinating discoveries, not "just another rock". It seems we're finding the same thing for other solar systems as well.
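
    A quick sketch of that mass point in Python. The planet mass (about half of Saturn's) is from the article; the star's mass is my assumption (a typical red dwarf of roughly 0.2 solar masses), so treat the result as order-of-magnitude only:

    ```python
    # Compare this system's planet/star mass ratio to Jupiter/Sun.
    M_SUN     = 1.989e30   # kg
    M_JUPITER = 1.898e27   # kg
    M_SATURN  = 5.683e26   # kg

    planet = 0.5 * M_SATURN  # ~50% of Saturn's mass, per the article
    star   = 0.2 * M_SUN     # assumed red-dwarf mass, not from the article

    print(f"this system: {planet / star:.1e}")      # ~7.1e-04
    print(f"Jupiter/Sun: {M_JUPITER / M_SUN:.1e}")  # ~9.5e-04
    ```

    Even with a generous star mass, the ratio lands in the same ballpark as Jupiter orbiting the Sun, which is why the planet's mass alone doesn't make the system absurd.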


  • Reminder that while it's not Lemmy, you can see the same stuff and more if you join Mbin. The power of the Fediverse is that there are lots of ways to do the same thing, which is important in situations like this.

    Ironically, I haven't looked into the current situation with .io lately, so I might end up doing the same thing eventually. But with .io being such a huge domain, there might be some accommodation made to avoid dropping so many websites.


  • LLMs can be good at openings. Not because they think through the rules or plan strategies, but because opening moves are well represented in general training data from all sorts of sources. The model is copying the most probable reaction to your move, based on lots of documentation. This of course breaks down when you stray from a typical play style: the model has fewer probable options to choose from, and after only a few moves there won't be much relevant data left at all, since the number of possible positions explodes.

    I.e., there are no calculations involved. When you play an LLM at chess, you're playing against a list of the most common moves in history.

    An even simpler example: tell the LLM that its last move was illegal. Even when the move was fine under the rules you just told it, it will agree and take it back. That comes from being trained to give satisfying replies to a human prompt. (The flip side, checking the model's own moves for legality, is sketched below.)
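
    A minimal sketch of what that external legality check looks like, since the model won't do it for you. `ask_llm_for_move` is a hypothetical stand-in for whatever model API you'd call (hard-coded here so the snippet runs); the python-chess library does the actual rules checking:

    ```python
    import chess

    def ask_llm_for_move(history: list[str]) -> str:
        # Hypothetical stand-in for a real model call. A real LLM just
        # predicts likely text, so it can happily return a move that is
        # illegal in the current position, like this one from the start.
        return "Qxf7"

    board = chess.Board()
    move = ask_llm_for_move([])
    try:
        board.push_san(move)  # raises ValueError on an illegal move
        print(f"played {move}")
    except ValueError:
        print(f"model suggested an illegal move: {move!r}")
    ```

    The model's text output and the game's actual rules only meet at that `push_san` call, which is the whole point: nothing inside the model enforces legality.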


  • I understand the cross-posting issue; it comes with a federated discussion format, and I don't think anyone has found a great way to solve it without such a distributed effort. It's ironic that back when there were so few instances (before and during the first Reddit migration), the concern was the opposite: without a lot of cross-posting there wouldn't be enough growth, and some communities might die out if they happened to be on a single failing instance. I'd rather have too much activity than none at all; at least you can filter or block the worst of it.


  • “Anybody who thinks LLMs are a direct route to the sort [of] AGI that could fundamentally transform society for the good is kidding themselves.”

    A quote from an article that still uses the generic "AI" to refer to LLMs, which costs it any credibility. It was probably written by an LLM - sorry, AI, since that's what the word means now. "AI" has become popular jargon for anything that seems like it's thinking; only serious people use AGI/ASI, and even they often slip up and say AI. It's a tainted word.

    I do think LLMs are, or will be, part of the toolset needed for AGI, but on their own, no: they aren't actually processing what they're being asked, so of course they can go astray on anything more complex than their training data.
