
  • Perhaps we’re talking past each other. The parent comment said that investors are always looking for better and better returns. You said that’s how progress works. That sentiment was my quibble.

    I took “investors are always looking for better returns” to mean “unethically so” and was talking more about what happens long term. Reading your comment above, I think you might have been talking about good-faith investing.

    In a sound system that’s how things work, sure! The company gets investment into its tech and continues to improve, and the investors get to enjoy the returns from that progress.


  • You’re conflating creating dollar value with progress. Yes the technology moves the total net productivity of humankind forward.

    Investing exists because we want to incentivize that. Currently you and the thread above are describing bad actors coming in, seeing this small, single-digit productivity increase, misrepresenting it so that other investors buy in, then dipping out and causing the bubble to burst.

    Something isn’t a ‘good’ investment just because it makes you a 600% return. I could go rob someone if I wanted that kind of return. Hell, even if I then killed that person by accident, the net negative to human productivity would be less.

    These bubbles unsettle homes, jobs, markets, and educations. An inefficiency that only makes money for people in the stock market should be crushed out.


  • I don’t disagree with anything you said, but I wanted to weigh in on the “more degrees of freedom” point.

    One major thing to consider is that unless we have 24/7 sensor recording with AI out in the real world, plus continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.

    Version and model updates will also likely continue to cause drift unless they’re managed through some sort of central distribution service (rough sketch of the kind of drift check I mean at the end of this comment).

    Any large corp will have this organization and review in place, or will be in the process of figuring it out. Small NFT/crypto bros who jump to AI will not.

    IMO the space will either head towards larger AI ensembles that try to understand where an exact rubric applies versus more AGI-style human reasoning, or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others versus understand the world (we all speak the same language as someone else, but there’s still a ton of inefficiency).
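
    A minimal sketch of the drift check I have in mind; the sensor numbers and the threshold are made up, it’s just the shape of the monitoring, not any particular product:

    ```python
    # Minimal drift check: compare live sensor readings against the
    # distribution the model was trained/validated on.
    # Data and threshold are invented for illustration.
    import statistics

    def drifted(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
        """Flag drift if the live mean sits far from the baseline mean, in baseline stdevs."""
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        z = abs(statistics.mean(live) - mu) / sigma
        return z > z_threshold

    baseline_readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
    live_readings = [23.9, 24.2, 24.0, 23.8, 24.1]  # e.g. after a firmware/model update

    if drifted(baseline_readings, live_readings):
        print("Drift detected: re-validate before trusting the model's output")
    ```

    Without some version of this running continuously, nobody can reconstruct what the AI actually saw when it triggered.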







  • Yeah, I saw some of your other comments regarding the third-world tree offsets. Definitely the shady RECs I was talking about. The shady REC pushers drive me nuts; the legislation on what you can and can’t count is usually pretty clear, but they’re just spam. I would love to see govt action against them.

    Net zero claims are usually pegged to a certain date, so I’m unsure if you’re saying your company thinks it’s already at net zero (very few are, as outlined in most govts’ reporting reqs) or that it has a commitment and is claiming Z% progress. This is also a good example of the balkanization of terms. Net zero to me means “company no longer emitting in scope 1 and 2 by year X”.

    If it’s the latter, that’s where the whole review and pushback on net zero as a marketing term comes in: some of these companies just went out and bought up all the renewable energy on the market, didn’t touch anything they can’t simply buy, and are now bitching that they don’t get to claim net zero progress. I agree with the committees, fuck those companies. If you’re not putting in the work then you shouldn’t be able to claim it.

    Re: the new system, you’re also describing the concept of additionality to the power grid. We will ultimately reach a point where we’ve only got things like new-growth forests or some other carbon sequestration left, but we’ve got a really long way to go just to shut off the flow of CO2.

    Right now a lot of focus is on getting scope 1 and 2 emissions to zero (what you burn directly and the energy you purchase) versus scope 3 (all your suppliers’ and users’ emissions too); there’s a rough sketch of that bookkeeping at the end of this comment. I believe you may be describing a scope 3 CO2 neutral, in which case I agree, but good luck getting most of our politicians right now to agree.

    It’s not the perfect end goal, but the logic goes that if every company has to get to net zero via its supply chain and by adding to the grid, we might see scope 3 hit zero with an additional crackdown on laggards?

    Edit: just to add that interim usage of biogenic fuels is a good way to cut CO2 while only releasing CH4 and N2O in very, very limited quantities (from what I’ve seen, usually less than from non-biogenic sources). Theoretically, committing capital to plant or grow these sources now could be used to “reduce” carbon impact in future supply chains. Probably has some issues.
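
    Rough sketch of how the scope bookkeeping above shakes out; all the numbers are invented, it’s just meant to show why REC purchases alone don’t get you very far:

    ```python
    # Toy scope 1/2/3 accounting (all numbers hypothetical).
    # Scope 1: direct emissions (on-site fuel, company fleet).
    # Scope 2: emissions from purchased energy (grid electricity, heat).
    # Scope 3: everything upstream/downstream (suppliers, product use).

    company = {
        "scope1_tCO2e": 12_000,   # on-site combustion, vehicles
        "scope2_tCO2e": 8_000,    # purchased electricity/heat
        "scope3_tCO2e": 150_000,  # supply chain + downstream use
    }

    # In this toy model, RECs / renewable purchases only ever offset scope 2.
    recs_purchased_tCO2e = 8_000

    remaining_scope12 = company["scope1_tCO2e"] + max(
        company["scope2_tCO2e"] - recs_purchased_tCO2e, 0
    )
    print(f"Scope 1+2 after RECs: {remaining_scope12} tCO2e")   # scope 1 untouched
    print(f"Scope 3:              {company['scope3_tCO2e']} tCO2e")

    # A "net zero" claim built only on buying down scope 2 leaves scope 1 and
    # the much larger scope 3 footprint completely unchanged.
    ```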




  • Hello? Companies have not been net zero for years. In fact, the US is anticipating increased reporting reqs on greenhouse gas emissions. What you might be referring to is that some companies went the “easy route”, bought a bunch of these sketchy RECs, and are now being told by certain committees that they cannot claim carbon neutral. I want to say it was CDP who was having these discussions?

    The biggest issue I think we see from a clarity standpoint is that there is currently balkanization over industry terms. CDP may say that carbon neutral claims have to be preceded by operational decarbonization, with only the remainder bought down, but the layperson doesn’t see that.

    Layer in that companies have to go by each jurisdiction they report in (businesses in the EU have one emissions factor set, the UK has another, Asia a third, etc.), and you get some confusing estimates.

    I promise you, though: most companies may have committed to net zero, but a very large portion ended up buying RECs to offset. The arguments happening now are good, because that’s the easy way out, and once everyone is doing it, it stops working.

    Back to carbon sequestration: we are repeatedly seeing that hyped new developments in this space are bunk, generate more carbon than they remove, OR are too hard to track and prove (can’t think of examples for that one, forests?).

    Some of the only ways companies are decarbonizing right now are greening of the grid and purchases of emissions-free energy. Unless we spend more time adding electricity that will not generate emissions to the grid, we will never be able to hit net zero (without a significant cultural change in consumption).

    Pragmatically, I think there is a lot to be gained from focusing on logistics improvements with a cost of carbon built in (to later be force-spent on RECs or sequestration tech/R&D); there’s a toy sketch of that idea at the end of this comment. Consider that some companies buy an entire fleet’s worth of gas and then ship it overseas for their fleet simply because it makes their audit and accounting easier. The net deficit in the energy we’re producing with that fuel makes it self-defeating. (I mean this in the sense that smaller distro networks will always be less efficient.)

    Sorry I didn’t mean to wall of text you but I find this stuff fascinating.
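
    Here’s a toy version of the “cost of carbon built in” idea from the logistics paragraph above; the carbon price, fuel volumes, and freight costs are all invented, and the diesel factor is just a rough combustion figure:

    ```python
    # Toy internal carbon price applied to a logistics decision.
    # All prices and volumes are invented for illustration.
    CARBON_PRICE_PER_TONNE = 80.0    # internal shadow price, USD/tCO2e (assumed)
    DIESEL_KG_CO2E_PER_LITRE = 2.68  # rough combustion factor for diesel

    def route_cost(fuel_litres: float, freight_cost: float) -> float:
        """Freight cost plus an internal charge for the CO2e the route burns."""
        tonnes_co2e = fuel_litres * DIESEL_KG_CO2E_PER_LITRE / 1000
        return freight_cost + tonnes_co2e * CARBON_PRICE_PER_TONNE

    # Shipping fuel overseas for the fleet vs. sourcing it regionally:
    ship_overseas = route_cost(fuel_litres=50_000, freight_cost=90_000)
    source_locally = route_cost(fuel_litres=8_000, freight_cost=95_000)

    print(f"overseas: ${ship_overseas:,.0f}   local: ${source_locally:,.0f}")
    # Before the carbon charge the overseas option looks cheaper; with it,
    # the charge can be earmarked for RECs or sequestration tech/R&D.
    ```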




  • I haven’t been in decision analytics for a while (and people smarter than I am are working on the problem), but I meant more along the lines of the “model collapse” issue; there’s a toy example of the loop at the end of this comment. Just because a human gives a thumbs up or down doesn’t make the output human-written training data to feed back. Eventually what it produces becomes “the most likely prompt response that this user will thumbs-up and accept”. (Note: I’m assuming the thumbs up/down are being pulled back into model feedback.)

    Per my understanding that’s not going to remove the core issue which is this:

    Any sort of AI detection arms race is doomed. There is ALWAYS new ‘real’ video for training, and even if GANs are a bit outmoded, the core concept of using synthetically generated content to train is a hot thing right now. Technically, whoever creates the fake videos to train on would have a bigger training set than the checkers.

    Since we see model collapse when we feed too much of this back into the model, we’re in a bit of an odd place.

    We’ve not even had an LLM available for a full year, but we’re already having trouble distinguishing real content from generated.

    I’m making waffles so I only did a light google, but I don’t think ChatGPT is leveraging GANs for its main algorithms; it’s just that the GAN concept could be applied to LLM text to make the delineation even harder.

    We’re probably going to need a lot more tests and interviews focused on critical reasoning and logic skills. Which is probably how it should have been all along, but it’ll be weird as that change happens.

    sorry if grammar is fuckt - waffles
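
    Since I brought up model collapse above, here’s a toy example of the feedback loop I mean. This is not how any real LLM is trained; it’s just the distribution-narrowing intuition, with made-up numbers:

    ```python
    # Toy "model collapse": repeatedly fitting a model to its own most-liked
    # outputs narrows the distribution every generation. Purely illustrative.
    import random
    import statistics

    random.seed(0)

    # Generation 0: "real" data with plenty of variety.
    data = [random.gauss(0, 1) for _ in range(10_000)]

    for generation in range(1, 6):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        # The "model" resamples from what it learned, but thumbs-up/down style
        # feedback keeps only the most typical outputs, trimming the tails.
        samples = sorted(random.gauss(mu, sigma) for _ in range(10_000))
        data = samples[2_000:8_000]  # keep the middle 60% each round
        print(f"gen {generation}: stdev = {statistics.stdev(data):.3f}")

    # The spread shrinks each generation: the model ends up producing only the
    # "most likely response this user will thumbs-up", and the rare stuff is gone.
    ```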