• 5 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 14th, 2023






  • I could understand method = associated function whose first parameter is named self, so it can be called like self.foo(…). This would mean functions like Vec::new aren’t methods. But the author’s requirement also excludes functions that take generic arguments like Extend::extend.

    However, even the above definition gives old terminology a new meaning. In traditional OOP languages, all functions in a class are considered methods: those only callable on an instance are “instance methods”, while the others are “static methods”. So, translating OOP terminology into Rust, all associated functions would still be considered methods, and those with/without method call syntax would be instance/static methods.

    Unfortunately, I think some people misuse “method” to refer only to “instance method”, even in OOP languages, so to be 100% unambiguous the terms would have to be the following (illustrated in the sketch after the list):

    • Associated function: function in an impl block.
    • Static method: associated function whose first argument isn’t self (even if it takes Self under a different name, like Box::leak).
    • Instance method: associated function whose first argument is self, so it can be called like self.foo(…).
    • Object-safe method: a method callable from a trait object.
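
    As a minimal sketch of those four terms (the Counter type and Describe trait are hypothetical, made up only to illustrate the distinctions):

    ```rust
    trait Describe {
        // Object-safe method: callable through a trait object (dyn Describe).
        fn describe(&self) -> String;
    }

    struct Counter {
        count: u32,
    }

    impl Counter {
        // Associated function / "static method": no self parameter,
        // so it must be called as Counter::new().
        fn new() -> Counter {
            Counter { count: 0 }
        }

        // Still a "static method" by the definition above: it takes Self
        // by value, but under a name other than self (compare Box::leak).
        fn into_count(counter: Counter) -> u32 {
            counter.count
        }

        // Instance method: first parameter is self, so c.increment() works.
        fn increment(&mut self) {
            self.count += 1;
        }
    }

    impl Describe for Counter {
        fn describe(&self) -> String {
            format!("count = {}", self.count)
        }
    }

    fn main() {
        let mut c = Counter::new();             // no method call syntax
        c.increment();                          // method call syntax
        let obj: &dyn Describe = &c;
        println!("{}", obj.describe());         // dispatched via a trait object
        println!("{}", Counter::into_count(c)); // must be called as a path
    }
    ```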





  • It’s funny because I’m probably in the minority, but I strongly prefer JetBrains IDEs.

    Which, ironically, are much more of a “walled garden”: closed-source and subscription-based, with only a limited subset of components and plugins open-source. But JetBrains has a good track record of not enshittifying, and because you actually pay for their product, they can run a profitable business without doing so.




  • But aren’t the GPUs used for AI different from the GPUs used by gamers? 8GB of VRAM isn’t enough to run even the smaller LLMs; you need specialized GPUs with 80+GB, like A100s and H100s.

    The top-tier consumer models like the 3090 and 4090 have 24GB, with which you can train and run smaller LLMs locally. But there still isn’t much demand to do that, because you can rent GPUs in the cloud cheaply enough that the break-even point where renting costs more than buying is very far off. For consumers it’s still too expensive to fine-tune your own model, and startups and small businesses have enough money to rent the more expensive, specialized GPUs.
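
    As a rough sketch of the arithmetic (assuming 2 bytes per parameter for fp16/bf16 weights, and ignoring activations, KV cache, and the gradients/optimizer state that training adds on top):

    ```rust
    // Rough VRAM needed just to hold the weights at 2 bytes per parameter
    // (fp16/bf16). Inference adds activations and a KV cache; training adds
    // gradients and optimizer state on top of this.
    fn weight_vram_gb(params_billions: f64, bytes_per_param: f64) -> f64 {
        // (params_billions * 1e9) params * bytes_per_param / 1e9 bytes-per-GB
        params_billions * bytes_per_param
    }

    fn main() {
        for &(name, params) in &[("7B", 7.0), ("13B", 13.0), ("70B", 70.0)] {
            println!(
                "{:>3} model: ~{:.0} GB of weights at fp16",
                name,
                weight_vram_gb(params, 2.0)
            );
        }
    }
    ```

    So an 8GB card can’t even hold 7B weights at fp16, a 24GB consumer card covers the smaller models, and anything in the tens of billions of parameters (or any serious training run) pushes you into the 80GB A100/H100 class.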

    Right now GPU prices aren’t extremely low, but you can actually buy them from retailers at market price. That wasn’t the case when crypto-mining was popular.