• 9 Posts
  • 46 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • Shit… sounds about right.

    Although the /water is hot enough/ scenario could be addressed mechanically: a bigger water tank, with the lower heating element raised and the heat pump heating the bottom exclusively, where it could /always/ add heat because the bottom would never be hot enough.

    (edit) After some thought, it would superficially make sense to get a factory water heater (tank) and not tamper with it at all. Just have a PV-powered HP pre-heat the water (in a tank or coil) before it enters the stock water heater. That way there could be 2 heat pumps (though for economy the stock tank would just be a simple non-heat-pump type, thus 1 HP). I guess this is still a dead idea anyway if it’s true that a PV cannot simply be connected directly to a compressor.




  • Because of Google’s DoS attack, those of us in the open free world cannot reach YouTube. So would someone please explain the concept in text?

    Is this it? → https://utahforge.com/2022/12/30/did-you-know-that-scottish-clubbers-use-dance-beat-to-generate-heat/

    Seems like a great idea. Like using body heat to boost a geothermal system.

    Someone plz tell Massive Attack about this. Massive Attack has gone gung-ho on eco-friendly festivals (in places inaccessible by car). They might want to throw some indoor events with this tech.

    (edit) from the article:

    “An experienced DJ could get up to 600 watts with the right song at the right time.”

    So IIUC that’s like ~1½ solar panels in full sun, correct? I guess that’s not much, but nonetheless not something to throw away either. So during the day solar panels on the roof could heat the ground pipes, and during the night the clubbers keep the system powered.
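
    Back-of-envelope on that comparison (a rough sketch; the ~400 W panel rating is my assumption, not from the article):

    ```python
    # How many typical PV panels does one energetic dance floor equal?
    dj_floor_watts = 600   # peak figure quoted in the article
    panel_watts = 400      # assumed rating of a typical modern residential panel
    print(dj_floor_watts / panel_watts)  # -> 1.5 panels' worth of power
    ```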

    It would be fun to measure that power output and use it as a performance index for each DJ: pay the DJ according to the power output they can produce. Though I guess that would screw over the ambient / trip-hop DJs.


  • Have they thought this through? Installing batteries much heavier than what the bus route requires makes the bus less efficient. Research in the UK found that a bus carrying 5 people is about as efficient as 5 cars each carrying 1 person, because of the weight of the bus. So the goal should be to fill the bus with people, not with excess batteries and overhead that need not move back and forth from A to B, which then requires more riders to maintain the same efficiency.

    Sure, they need to store energy to smooth out peak grid consumption, but it’s probably smarter to do that with stationary batteries – if they must use batteries at all. Another way to store energy: pump water to the top of a mountain and open a dam that turns a hydro turbine when the energy is needed back.
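
    For a sense of scale on the pumped-hydro idea, a rough sketch (all numbers below are illustrative assumptions, not from the article):

    ```python
    # Gravitational potential energy stored by pumping water uphill: E = m * g * h
    volume_m3 = 1000       # assumed reservoir volume: 1000 cubic metres of water
    height_m = 300         # assumed elevation gain in metres
    density = 1000         # kg per cubic metre of water
    g = 9.81               # m/s^2
    round_trip_eff = 0.75  # assumed pumped-hydro round-trip efficiency

    energy_j = volume_m3 * density * g * height_m * round_trip_eff
    print(energy_j / 3.6e6)  # ~613 kWh recoverable
    ```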

    from the article:

    Pollution from buses and other vehicles contributes to chronic asthma among students, which leads to chronic absenteeism.

    Seems like a stretch. Even if they can attribute chronic absenteeism to air pollution and keep a straight face, moving the pollution of a fleet of buses that makes 2 trips/day from the street to the power plant isn’t going to change the absenteeism by reducing asthma. This claim only signals a bit of desperation to get support.



  • Well that depends on how equipped you are. One cool thing about compressors is you can straight up connect a PV directly to a compressor with no voltage regulators or anything. So if you have a simple setup like that, I can see up-front cost effectiveness in storing ice. But if you already have batteries, and thus voltage regulators and all the costly intermediate components to make that possible, then I would agree… I might rather store the energy in lead-acid batteries as that would be more versatile.




  • Consider this excerpt:

    When the grid is extremely stressed, utility companies are sometimes forced to shut off electricity supply to some areas, leaving people there without power when they need it most. Technologies that can adjust to meet the grid’s needs could help reduce reliance on these rolling blackouts.

    So grid-powered a/c can give the grid relief at peak times with this tech.

    But indeed this tech on a PV-powered compressor seems sketchy. There are probably moments when the sun is hitting hard but the temperature has not climbed yet (around sunrise), in which case it would be useful to store the energy. But I’m struggling to understand how the complexity of the system would be justified, considering the overall efficiency is reduced as well. I wonder what proportion of time this system would be working in storage mode. If sunrise is 9am and peak heat is 2pm, maybe there’s ~2-4 hours of storage time potential.
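
    A rough feel for what that window could bank as ice (the compressor size and COP below are my assumptions, not from the article):

    ```python
    # Cooling that could be stored as ice during the morning window.
    storage_hours = 3    # middle of the ~2-4 hour window estimated above
    compressor_kw = 1.0  # assumed electrical draw of a small PV-driven compressor
    cop = 3.0            # assumed coefficient of performance

    cooling_kwh = storage_hours * compressor_kw * cop
    print(cooling_kwh)   # ~9 kWh of cooling shifted to peak-heat hours
    ```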

    OTOH, consider someone with a slightly underpowered PV. Maybe the energy storage can compensate for peak heat times when the PV output may be insufficient. Perhaps it would enable homeowners to spend less on PV panels.








  • Your question forced me to revisit this and take a closer look. I have in my notes, “If someone’s fingerprint is untrusted, they will get an encrypted msg that they cannot read.” So I entered a 1:1 window with the one person who only ever gets errors from me, entered /omemo fingerprint, and it simply showed the person’s fingerprint. Then I did the same for someone who has fewer issues with me, and printed next to their fingerprint is “(trusted)”. Ah ha! The other acct has an untrusted fingerprint and Profanity does a shitty job of informing the user. The absence of a “(trusted)” when asking for the fingerprint is the crucial indicator.

    To answer your question, I think keys are managed automatically. I never had to add a key, but I have had to trust fingerprints. In the new version of Profanity it’s possible to enter /omemo trustmode blind. That would also solve my problem, but I don’t want to be sloppy. So I have to guide the other user to their own fingerprint and confirm it.
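
    Roughly, the verification flow in a 1:1 window looks like this (the fingerprint is a placeholder; check /help omemo for the exact argument syntax in your Profanity version):

    ```
    /omemo start               # enable OMEMO in the current 1:1 window
    /omemo fingerprint         # list the contact's fingerprints; "(trusted)" marks trusted ones
    /omemo trust <fingerprint> # trust a fingerprint after confirming it out of band
    /omemo trustmode manual    # keep manual trust rather than blind
    ```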

    (edit)
    Well this is bizarre. There are a couple of people I can talk to in Profanity just fine with OMEMO enabled, and their fingerprints also lack the “(trusted)” next to them. Yet my trustmode is “manual”.




  • The server is snikket.chat.

    I am not sure what causes the OMEMO error though, as I am not an iOS user.

    I believe Profanity is mostly to blame for those errors. Profanity loses track of other users’ keys and fingerprints, and I think what it does is encrypt the msg to myself, then transmit it to the recipient without encrypting it to them. The recipient then gets a msg that’s encrypted to others but that they cannot decrypt. To worsen matters, XMPP seems to use the same incorrect error message for many different situations. Profanity really needs to change so that if any of the recipients’ keys are not found, it refuses to send the msg. I see a bogus error on my end as well, and the fix is to disable OMEMO then re-enable it (/omemo end; /omemo start).

    In any case, thanks for the suggestion. I’ll see if I can get someone to try that app. I cannot be fussy about features. I really just need text msgs to work.



  • “Just searching the code where the address book API is used” most certainly does not give you increased confidence.

    That’s the starting point. It only takes 5 minutes to get there and find the object of interest. If you don’t spend 10-30 minutes more to see how the object is used, you’re doing it wrong. And if you try to read every single line of code in the project, you’re also doing it wrong.
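
    For an Android app, that first pass can be as mechanical as grepping the source tree for the contacts APIs. A minimal sketch (the script itself is mine; ContactsContract and READ_CONTACTS are the standard Android identifiers):

    ```python
    #!/usr/bin/env python3
    # List every source file/line that touches the Android contacts APIs,
    # so a reviewer knows which handful of files need close reading.
    import re
    import sys
    from pathlib import Path

    PATTERNS = re.compile(r"ContactsContract|READ_CONTACTS|WRITE_CONTACTS")

    def find_contact_usage(root: str) -> None:
        for path in Path(root).rglob("*"):
            if path.suffix not in {".java", ".kt", ".xml"}:
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), 1):
                if PATTERNS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        find_contact_usage(sys.argv[1] if len(sys.argv) > 1 else ".")
    ```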

    Obfuscation is not that difficult.

    Obfuscation is even easier to spot than it is to create, and spotting it would on its own be good grounds to reject a package.

    You can only possibly gain confidence if you fully understand every single line of code.

    As I said, you need not read every single line of code. Just the code touching the address book.

    I ignored it because it’s idiotic. Google isn’t and shouldn’t be building code for you unless you pay for it.

    It’s looking more and more like English is not your first language. You continually fail to comprehend what I’ve said, which was the complete opposite of this comment, after you yourself suggested that a code review effort is comparable to a new hire’s onboarding effort.

    One more time: a company having people review specific code for a specific purpose does not in any way resemble an adversarial code review against bad actors.

    Again, that is not the purpose of the code review. If the purpose is to generally find malicious code, that’s a very different criterion than /not exporting an address book/. And if you move the goalposts to that mission, you have no fucking chance of doing that with the simple black-box analysis you’re advocating.

    There are no parallels. A code review gives you literally zero confidence that the writer isn’t malicious

    A code review is the absolute cheapest, most effective way to find malicious code, if that’s your new goal. You will not find malicious code with any confidence by looking at a TLS traffic tunnel and playing with the app as a user. You can see that the app connects to the Snikket server and you can see that blobs are passed back and forth, which is expected anyway. From there, you have to guess from the timing and payload sizes that something is off, at which point you still really know fuck all. It’s a lot of effort for a level of confidence that is still insufficient to condemn the app.

    unless you comprehensively understand every single line.

    Clearly you’ve never written software. Malicious code does not affect every single line, nor does finding malice require an understanding of every single line. Bugs would never be found on any large project if that were true. Every code review I’ve performed has been narrow in scope, and yet I still find non-conformant code. A developer can work on a project for ~10-20 years of their life and still only see a small fraction of the code, yet they still discover bugs in very little time. If you think you need to look at every single line, I suggest avoiding the software career path.

    Open source project security is entirely and exclusively reputational.

    Reputation matters whether a project is FOSS or not. But if it’s closed-source, reputation is all you have. Of course it’s nonsense to claim FOSS code cannot be reviewed by anyone who cares to step beyond reputation.


  • An organization reviewing its own code is not the same, or similar in any way, to an organization reviewing a large volume of external code for malicious intent.

    This is neither of those cases. This is trivially searching the code for where the address book API is called and inspecting only the code relevant to that object for a specific usage. If you review the whole volume of code for the entire application, you’re doing it wrong. It’s trivial and, for the reasons I’ve already explained, less effort than dynamic analysis and traffic analysis.

    And it doesn’t work for a wide variety of reasons (including the one I already gave you that binaries don’t provide you any guarantees that they’re from the source).

    And you apparently missed the response, because you’ve neglected to address it. That claim was already defeated.

    Onboarding is universally slow because new people take weeks to months to actually meaningfully understand big projects.

    You’re thinking of hiring people to work on code they need to understand in depth in order to edit it. That’s not the case here. Code reviews are much cheaper than onboarding developers.

    Again, you’re asking for FOSS code to get some special treatment and bypass the requirements already in place.

    Again, no exemption has been requested. Google is either smart enough to make use of info at their disposal, or they are not. (answer: they are not).

    It’s completely absurd, because every single one of those tests would still be unconditionally mandatory to get any kind of actual confidence in security.

    Only if you do it wrong. A code review gives more confidence about what happens with the address book than testing does. Only a fool would needlessly spend money on the more costly and redundant black-box approach, which yields results (guesswork!) with less confidence¹. Sure, you can also do the black-box analysis, but that’s just wasting money when the bar has already been cleared. You would do both if lives depended on the code, but that standard is far above Google’s.

    Choosing to skip them because someone in India skimmed the code would be way past gross negligence.

    You’re still not getting it. No one advocates for an exemption. You need to get that out of your head. A code review is a way to more cheaply do the verification with higher confidence, not to bypass it.

    ¹ Hence why Google failed many times to get it right.


  • A. Code review doesn’t work.

    You’re doing it wrong.

    B. Code review takes a very large amount of highly qualified man hours to not work.

    Not if a machine does it. And even if they use humans, it takes even more man-hours to do the alternative dynamic analysis and traffic analysis. Code review saves countless man-hours even if done 100% manually by humans.

    C. Requiring review of proprietary code exposes Google to a crazy amount of antitrust and IP liability. Again, to not work.

    Not applicable to FOSS code.

    Code review doesn’t happen because it’s a laughably stupid idea that has virtually no chance of being beneficial in any way. It’s not an oversight.

    Code reviews happen at every organisation I have worked for, to catch unwanted code before deployment and testing. The reason we review code before testing is that it’s cheaper to review code than to test it. It’s laughably stupid to think code review doesn’t work, only to then spend more money on verification tests.


  • The issue they’re complaining about is that they’re being held to additional standards because they ask for a sensitive permission.

    That’s not Snikket’s complaint. Snikket naturally satisfies the standards at hand because they do not export address book data, so they have no reason to object to the standards Google is failing to verify. Their complaint is rightfully about Google’s incompetence in evaluating their compliance. It’s clear from Snikket’s account what a shit show it is at Google, which failed repeatedly to evaluate their software.

    There’s nothing more terrible for a software repository than the incompetence of neglecting to review code as part of the acceptance process. I can’t think of a more foolish policy than ignoring the code of the software whose quality you are trying to endorse.