
  • you’re the only one with your SSL keys. As part of authentication, you are identified. All the information about your device is transmitted. Then you stop identifying yourself in future messages, but your SSL keys tie your messages together. They are discarded once the message is decrypted by the server, so your messages should in theory be anonymised in the case of a leak to a third party. That seems to be what sealed sender is designed for, but it isn’t what I’m concerned about.

    Why do you think that Signal uses SSL client keys or that it transmits unique information about your device? Do you have a source for that or is it just an assumption?



  • And it’s I who should take a course in encryption and cybersecurity.

    Yes. I was trying to be nice, but you’re clearly completely ignorant and misinformed when it comes to information security. Given that you self-described as a “cryptography nerd,” it’s honestly embarrassing.

    But since you’ve doubled down on being rude just because I pointed out that you don’t know what you’re talking about, it’s unlikely you’ll ever learn enough about the topic to have a productive conversation anyway.

    Have fun protecting your ignorance.


  • Nice try FBI.

    Wouldn’t “NSA” or “CIA” be more appropriate here?

    Well, if my pin is four numbers, that’ll make it so hard to crack. /s

    If you’re using a 4-digit PIN, then that’s on you. The blog post I shared covers that explicitly: “However, there’s a limit to how slow things can get without affecting legitimate client performance, and some user-chosen passwords may be so weak that no feasible amount of “key-stretching” will prevent brute force attacks” and later, “However, it would allow an attacker with access to the service to run an “offline” brute force attack. Users with a BIP39 passphrase (as above) would be safe against such a brute force, but even with an expensive KDF like Argon2, users who prefer a more memorable passphrase might not be, depending on the amount of money the attacker wants to spend on the attack.”
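    To put rough numbers on why the passphrase choice matters (my own back-of-the-envelope arithmetic, not from the blog post; the per-guess cost is a hypothetical):

    ```python
    # Back-of-the-envelope brute-force cost. ASSUMPTION: key-stretching makes
    # each guess cost ~100 ms on the attacker's hardware (illustrative only).
    SECONDS_PER_GUESS = 0.1

    pin_space = 10 ** 4      # 4-digit PIN: 10,000 possibilities
    bip39_space = 2048 ** 6  # 6-word BIP39 passphrase: 2,048-word wordlist

    print(f"4-digit PIN, worst case:  {pin_space * SECONDS_PER_GUESS / 3600:.2f} hours")
    print(f"6-word BIP39, worst case: {bip39_space * SECONDS_PER_GUESS / (3600 * 24 * 365):.1e} years")
    ```

    Even at a wildly optimistic 100 ms per guess, a 4-digit PIN falls in minutes to hours, while a six-word passphrase holds up for geological timescales. That asymmetry is exactly what the blog post is describing.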

    If you can’t show hard evidence that everything is offline locally, no keys stored in the cloud, then it’s just not secure.

    If you can’t share a reputable source backing up that claim, along with a definition of what “secure” means, then your claim that “it’s just not secure” isn’t worth the bits taken to store the text in your comment.

    You haven’t even specified your threat model.

    BTW, “keys” when talking about encryption is the keys used to encrypt and decrypt,

    Are you being earnest here? First, even if we were just talking about encryption, the question of what’s being encrypted is relevant. Second, we weren’t just talking about encryption. Here’s your complete comment, for reference:

    I have read that it is self hostable (but I haven’t digged into it) but as it’s not a federating service so not better than other alternative out there.

    Also read that the keys are stored locally but also somehow stored in the cloud (??), which makes it all completely worthless if it is true.

    That said, the three letter agencies can probably get in any android/apple phones if they want to, like I’m not forgetting the oh so convenient “bug” heartbleed…

    Just so you know, “keys” are used for a number of purposes in Signal (and for software applications in general) and not all of those purposes involve encryption. Many keys are used for verification/authentication.

    Assuming you were being earnest: I recommend that you take some courses on encryption and cybersecurity, because you have some clear misconceptions. Specifically, I recommend that you start with Cryptography I (by Stanford, hosted on Coursera. See also Stanford’s page for the course, which contains a link to the free textbook). Its follow-up, Crypto II, isn’t available on Coursera, but I believe that this 8-hour-long YouTube video contains several of the lectures from it. Alternatively, Berkeley’s Zero Knowledge Proofs course would be a good follow-up, and basically everything (excepting the quizzes) appears to be freely available online.

    it wouldn’t be very interesting to encrypt them, because now you have another set of keys you have to deal with.

    The link I shared with you has 6 keys (stretched_key, auth_key, c1, c2, master_key, and application_key) in a single code block. By encrypting the master key (used to derive application keys, such as the one that encrypts social graph information) with a user-derived, stretched key, Signal can offer an optional feature: the ability to recover that encrypted information if a user’s device is lost, stolen, wiped, etc., though of course message history is out of scope.
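    For the shape of it, here’s a rough Python rendering of that derivation chain (the key names mirror the blog post’s pseudocode, but I’ve swapped in PBKDF2 for Argon2 just to stay standard-library-only, and the salt handling is illustrative):

    ```python
    import hashlib
    import hmac
    import os

    def hmac_sha256(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    # Stand-in for the memory-hard KDF (Argon2 in the blog post); PBKDF2 is
    # used here only to keep the sketch dependency-free.
    def stretch(passphrase: bytes, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

    stretched_key = stretch(b"user PIN or passphrase", os.urandom(16))

    auth_key = hmac_sha256(stretched_key, b"Auth Key")
    c1 = hmac_sha256(stretched_key, b"Master Key Encryption")
    c2 = os.urandom(32)  # random key material held server-side

    master_key = hmac_sha256(c1, c2)
    application_key = hmac_sha256(master_key, b"Social Graph Encryption")
    ```

    The point of the layering: losing your device doesn’t lose the master key, but recovering it still requires the user-derived stretched key.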

    Full disk encryption also uses multiple keys in a similar way. Take LUKS, for example: your drive is encrypted with a master key, and you recover the master key by decrypting one of the key slots with a key derived from its corresponding passphrase. (Source: section 4.3 in the LUKS1 On-Disk Format Specification; I don’t believe this basic behavior changed in LUKS2.)
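    The common pattern in both cases is a single random master key, wrapped separately under each credential. A minimal sketch of that structure (illustrative only; real LUKS adds anti-forensic key splitting, per-slot iteration counts, and more):

    ```python
    import base64
    import hashlib
    import os

    from cryptography.fernet import Fernet  # pip install cryptography

    def slot_key(passphrase: bytes, salt: bytes) -> bytes:
        # Per-slot KDF; LUKS uses PBKDF2 or Argon2 with a per-slot salt.
        raw = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)
        return base64.urlsafe_b64encode(raw)

    master_key = Fernet.generate_key()  # the key the data is actually encrypted with

    # Each key slot stores the master key wrapped under one passphrase-derived key.
    slots = {}
    for name, phrase in [("primary", b"correct horse"), ("rescue", b"battery staple")]:
        salt = os.urandom(16)
        slots[name] = (salt, Fernet(slot_key(phrase, salt)).encrypt(master_key))

    # Unlocking: re-derive the slot key from the passphrase, unwrap the master key.
    salt, wrapped = slots["primary"]
    assert Fernet(slot_key(b"correct horse", salt)).decrypt(wrapped) == master_key
    ```

    The payoff is that you can add or revoke a passphrase by rewriting one slot, without re-encrypting the whole drive.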


  • Its impossible to verify what code their server is running.

    Signal has posted multiple times about their use of SGX Secure Enclaves and how you can use Remote Attestation techniques to verify a subset of the code that’s running on their server, which directly contradicts your claim. (It doesn’t contradict the claim that you cannot verify all the code their server is running, though.) Have you looked into that? What issues did you find with it?

    I posted a comment here going into more detail about it, but I haven’t personally confirmed that it’s feasible.




  • The sender ('s unique device) can with 100% accuracy be appended to the message by the server after it’s received.

    How?

    If I share an IP with 100 million other Signal users and I send a sealed sender message, how does Signal distinguish between me and the other 100 million users? My sender certificate is encrypted and only able to be decrypted by the recipient.
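    The shape of the idea, for anyone unfamiliar (this is not Signal’s actual wire format; it’s a sketch using libsodium sealed boxes via PyNaCl, just to show “the server can route it, but only the recipient learns the sender”):

    ```python
    import json

    from nacl.public import PrivateKey, SealedBox  # pip install pynacl

    recipient = PrivateKey.generate()

    # The sender bundles its certificate with the message and seals everything
    # to the recipient's public key; the server sees only ciphertext plus the
    # delivery address.
    envelope = json.dumps({
        "sender_certificate": "signed cert identifying the sender",
        "message": "hello",
    }).encode()

    ciphertext = SealedBox(recipient.public_key).encrypt(envelope)

    # Only the recipient can open the envelope and learn who sent it.
    opened = json.loads(SealedBox(recipient).decrypt(ciphertext))
    ```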

    If I’m the only user with my IP address, then sure, Signal could identify me. I can use a VPN or similar technology if I’m concerned about this, of course. Signal doesn’t consider obscuring IPs to be in scope for their mission; they said as much in response to a recent Cloudflare vulnerability that impacted Signal. From https://www.404media.co/cloudflare-issue-can-leak-chat-app-users-broad-location/:

    404 Media asked daniel to demonstrate the issue by learning the location of multiple Signal users with their consent. In one case, daniel sent a user an image. Soon after, daniel sent a link to a Google Maps page showing the city the user was likely in.

    404 Media first asked Signal for comment in early December. The organization did not provide a statement in time for publication, but daniel shared their response to his bug report.

    “What you’re describing (observing cache hits and misses) is a generic property of how Content Distribution Networks function. Signal’s use of CDNs is neither unique nor alarming, and also doesn’t impact Signal’s end-to-end encryption. CDNs are utilized by every popular application and website on the internet, and they are essential for high-performance and reliability while serving a global audience,” Signal’s security team wrote.

    “There is already a large body of existing work that explores this topic in detail, but if someone needs to completely obscure their network location (especially at a level as coarse and imprecise as the example that appears in your video) a VPN is absolutely necessary. That functionality falls outside of Signal’s scope. Signal protects the privacy of your messages and calls, but it has never attempted to fully replicate the set of network-layer anonymity features that projects like Wireguard, Tor, and other open-source VPN software can provide,” it added.

    I saw a post about this recently on Lemmy (and Reddit), so there’s probably more discussion there.

    since the sender is identified at the start of every conversation.

    What do you mean when you say “conversation” here? Do you mean when you first access a user’s profile key, which is required to send a sealed sender message to them if they haven’t enabled “Allow From Anyone” in their settings? If so, then yes, the sender’s identity when requesting the contact would necessarily be exposed. If the recipient has that option enabled, that’s not necessarily true, but I don’t know for sure.

    Even if we trust Signal, Sealed Sender without any sort of random delay in message delivery means that a nation-state-level adversary could observe inbound and outbound network activity and derive high-confidence information about who’s contacting whom.
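    As a toy illustration of that traffic-analysis point (fabricated timestamps; a real adversary would do this statistically over huge volumes of traffic):

    ```python
    # Toy timing correlation: if outbound events from IP A are reliably followed
    # within a small window by inbound deliveries to IP B, an observer can infer
    # that A talks to B without reading any message contents.
    outbound = [("1.2.3.4", 100.00), ("1.2.3.4", 205.30), ("1.2.3.4", 310.10)]
    inbound = [("5.6.7.8", 100.04), ("5.6.7.8", 205.33), ("5.6.7.8", 310.16)]

    WINDOW = 0.2  # seconds
    hits = sum(
        1
        for _, t_out in outbound
        for _, t_in in inbound
        if 0 <= t_in - t_out <= WINDOW
    )
    print(f"{hits}/{len(outbound)} outbound events matched -> likely correlated")
    ```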

    All of that said, my understanding is that contact discovery is a bigger vulnerability than Sealed Sender if we don’t trust Signal’s servers. Here’s the blog post from 2017 where Moxie describes their approach. (See also this blog post where they talk about improvements to “Oblivious RAM,” though it doesn’t have more information on SGX.) He basically said, “This solution isn’t great if you don’t trust that the servers are running verified code.”

    This method of contact discovery isn’t ideal because of these shortcomings, but at the very least the Signal service’s design does not depend on knowledge of a user’s social graph in order to function. This has meant that if you trust the Signal service to be running the published server source code, then the Signal service has no durable knowledge of a user’s social graph if it is hacked or subpoenaed.

    He then went on to describe their use of SGX and remote attestation over a network, which was touched on in the Sealed Sender post. Specifically:

    Modern Intel chips support a feature called Software Guard Extensions (SGX). SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

    Later in that blog post, Moxie says “The enclave code builds reproducibly, so anyone can verify that the published source code corresponds to the MRENCLAVE value of the remote enclave.” But how do we actually perform this remote attestation? And is it as secure and reliable as Signal attests?

    In the docs for the “auditee” application, the Examples page provides some additional information and describes how to use their tool to verify the MRENCLAVE value. Note that they also say that the tool is a work in progress and shouldn’t be trusted. The Intel SGX documentation likely has information as well, but most of the links that I found were dead, so I didn’t investigate further.
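    Conceptually, though, the check that such a tool automates reduces to something like this (a sketch of the logic, not the auditee tool’s actual interface; the real MRENCLAVE measurement hashes the enclave’s pages and layout as they are loaded, not the raw file bytes):

    ```python
    import hashlib

    def expected_mrenclave(enclave_binary: bytes) -> bytes:
        # Stand-in for the real SGX measurement, which is computed over the
        # enclave's memory pages during load rather than the file on disk.
        return hashlib.sha256(enclave_binary).digest()

    def verify(quote_mrenclave: bytes, reproducible_build: bytes) -> bool:
        # 1. Reproducibly build the published source into an enclave binary.
        # 2. Compute its measurement.
        # 3. Compare against the MRENCLAVE reported in the Intel-signed quote.
        return quote_mrenclave == expected_mrenclave(reproducible_build)
    ```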

    A blog post titled Enhancing trust for SGX enclaves raised some concerns with SGX’s current implementation, specifically mentioning Signal’s usage, and suggested (and implemented) some improvements.

    I haven’t personally verified the MRENCLAVE values for any of Signal’s services and I’m not aware of anyone who has (successfully, at least), but I also haven’t seen any security experts stating that the technology is unsound or doesn’t actually do what’s claimed.

    Finally, I recommend you check out https://community.signalusers.org/t/overview-of-third-party-security-audits/13243 - some of the issues noted there involve the social graph and at least one involves Sealed Sender specifically (though the link is dead; I didn’t check to see if the Internet Archive has a backup).


    Message history won’t be fully fixed. It can’t be without storing message backups in some cloud somewhere (whether that’s iCloud, Google Drive, Dropbox, or Signal’s servers), and Signal omits its message history from system backups on iOS and Android.

    iOS users are completely incapable of backing up their message history in the event of their phone being lost, stolen, or broken. This omission isn’t justified in any way, as far as I’m aware; I don’t know of any technical reason why following the exact same process as on Android wouldn’t work.

    Android users are able to back up locally via Signal, but that isn’t on by default, can’t be automated, needs to be backed up separately, requires you to record a 30-digit code to decrypt it, and has limitations on when it can be used for a restore (it can’t restore onto iOS, for example). See https://support.signal.org/hc/en-us/articles/360007059752-Backup-and-Restore-Messages for more details.
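    For what it’s worth, that 30-digit code is essentially a human-transcribable encoding of the backup encryption key. Something in this spirit, purely as an illustration of the technique (this is not Signal’s documented derivation; the salt and iteration count here are hypothetical):

    ```python
    import hashlib

    # ASSUMPTION: illustrative only, not Signal's actual backup scheme.
    code = "123456789012345678901234567890"        # what the user writes down
    salt = b"per-backup salt from the file header"  # hypothetical

    key = code.encode()
    for _ in range(250_000):  # heavy iteration slows offline brute force
        key = hashlib.sha512(salt + key).digest()

    backup_key = key[:32]  # symmetric key that encrypts the backup file
    ```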

    Message history on linked devices - meaning iPads and desktop computers - is being improved, but it still won’t mean that a user who loses their phone, or trades it in for a new one, can simply restore from a system backup and get their Signal message history back. And even that isn’t anywhere near as easy as on Telegram, where a user can just log in with their password and restore their message history, no backup needed.

    It’s great that they’re improving the experience for linked devices, but right now that doesn’t actually help if you lose, break, or trade in your phone. Maybe they’ll later allow users to restore a phone from a linked device or support backups on iPhones, but right now the message history situation isn’t just unfriendly UX; it’s explicitly and intentionally unreliable for a huge portion of Signal’s user base.


  • Also read that the keys are stored locally but also somehow stored in the cloud (??),

    Which keys? Are they always stored or are they only stored under certain conditions? Are they encrypted as well? End to end encrypted?

    which makes it all completely worthless if it is true.

    It doesn’t, because what you described above could be fine or could have huge security ramifications. As it is, my guess is that you’re talking about how Signal supports secure value recovery. In that case:

    1. The key is used to encrypt your contacts, profile name, group avatars, social graph, etc., but not your messages.
    2. Your key is only uploaded to the cloud if you have set a recovery PIN or passphrase.
    3. Your key is encrypted using a key derived from your PIN or passphrase, with techniques (key-stretching, storage in server secure enclaves) that make it more difficult to brute-force.

    The main criticisms of this are that you can’t opt out of it without opting out of the Registration Lock, that it necessarily uses the same PIN or passphrase as the Registration Lock, and that, particularly because it isn’t clear that your PIN/passphrase is used for encryption, users are less likely to choose a more secure passphrase here.

    But even without the extra steps that we can’t 100% confirm, like the use of secure enclaves on the servers, this is e2ee, can be opted out of by the user, can’t be used to recover past messages, and can’t be used to decrypt future messages.





    Unless something has changed, it did. The linked page reads:

    And, obviously, this POC is open source, the code is publish here on our forge.

    The link takes you to their repos. The server repo has instructions for self-hosting directly on your server or with Docker. The app repo has code for both the iOS and Android apps. That’s good, because the iOS app, at least, doesn’t have a built-in way to select a different backend server, so you’d need to build it yourself to point it at your own instance.

    Whisper is by OpenAI and as far as I know they have not shared the training code, much less the data sets, so the best you can do is fine-tune the models they’ve provided.
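    (For context: running inference with the published weights is trivial; it’s the training side that’s closed. A minimal sketch with the openai-whisper package, assuming an audio.mp3 on disk:)

    ```python
    import whisper  # pip install openai-whisper

    # Inference with the published checkpoints; fine-tuning has to start from
    # these same weights because the training pipeline and data were never released.
    model = whisper.load_model("base")
    result = model.transcribe("audio.mp3")  # hypothetical input file
    print(result["text"])
    ```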

    If use of Whisper is a problem, but the project is otherwise interesting to you, you could ask them to consider using a different STT solution (or allowing the user to choose between different options). I’m not aware of any fully open STT solutions considered as capable as Whisper, but if you know of one, that would be great info to share with them.


  • Depends on your perspective. Would it be fine for Meta Threads to replace it? Threads supports ActivityPub, so in some ways it likely interacts better with the fediverse.

    If we agree that Threads isn’t a suitable replacement, then clearly there’s some criteria a replacement should meet. A lot of the things that make Threads unpalatable are also true of Bluesky, particularly if your concern relates to the platform being under the control of a corporation.

    On the other hand, from the perspective of “Twitter 2.0 is now a toxic, alt-right cesspool where productive conversations can’t be had,” then both Threads and Bluesky are huge improvements.



  • The rules text says it creates an area of darkness, and with your interpretation, it doesn’t, which means your interpretation is wrong. Yes, the ability could be written more clearly, but the logic for a reasonable way for it to function follows pretty cleanly. Your interpretation is not RAW or RAI.

    There’s a reply on RPG StackExchange that covers a similar line of logic to what I wrote above.

    Remember that Fifth Edition D&D is intentionally not written with the same exacting precision as games like M:tG. The game doesn’t have an explicit definition of magical darkness, but it’s pretty clear that the intent is for magical to trump mundane (when it comes to sources of light and darkness). Even the Specific Beats General section says that most of the exceptions to general rules are due to magic.


  • If you have normal darkness everywhere, there isn’t a reason to use it, but you don’t always have darkness everywhere. In fact, you generally don’t.

    Not all monsters with darkvision have access to light sources. Even if they do, they may need an action to use it or may be out of range. A torch or the light cantrip only has a 40’ range. If you collaborate on positioning with the caster, you can basically set yourself up to have advantage every turn thanks to the darkness, since as a ranged attacker you don’t have to stay within 40’ of your enemies.

    Also, Gloom Stalkers can’t see through Darkness like Warlocks can, so this effect is useful to them in a way that the Darkness spell isn’t.

    That all said, Tricksy wouldn’t do anything if it didn’t block nonmagical illumination, so it’s reasonable to run it as though it does. Sure, it still wouldn’t block even a cantrip, but it would block torches, lanterns, the sun, etc.

    And running it as though it doesn’t block nonmagical light results in nonsensical behavior. You’re in a torchlit chamber and use the ability - now there’s a cube of darkness, blocking the light of all four nonmagical torches, since the rules text says it creates an area of darkness. If you move one of those torches away and then back, why would its light suddenly pierce the magical darkness? And if it wouldn’t, why would a brand-new nonmagical light source?