• Jessica@lemmy.blahaj.zone
    11 months ago

I don’t use the current AI, specifically because it isn’t open source. Could I audit the code of an open-source AI? Certainly not; it’s way over my head. However, there would be an opportunity for experts to examine the source and report their findings. Currently? Black box, so no thanks.

    There are so many projects that could become possible through novel use of an open source AI (or whatever it should actually be called).

    Judging by the seemingly exponential improvements and integration, opinions such as ours are a grain of sand in Death Valley.

    • Sanyanov@lemmy.world
      11 months ago

To be completely fair, even open-source AIs are a bit of a black box due to the way neural networks work - but I’d greatly appreciate it if we at least knew the parameters on which they were trained.

It is absolutely possible to train all sorts of biases into a closed-source AI, which would be very hard to do in an open-source model. You can roughly steer the outputs however you like. In other words, going open source practically removes the malicious human factor (without removing the positive impact).

Open-source models also can’t be restricted, paywalled, or limited in any meaningful way, which is also vital.