- cross-posted to:
- technology@rabbitea.rs
- technews@radiation.party
Over the past one and a half years, Stack Overflow has lost around 50% of its traffic. This decline is similarly reflected in site usage, with approximately a 50% decrease in the number of questions and answers, as well as the number of votes these posts receive.
The charts below show usage as a 49-day moving average.
What happened?
Amazing how much hate SO receives here. As a knowledge base it works really well. And yes, a lot of questions have been answered already. And also yes, just like any other online community, there are bad apples you unfortunately have to live with.
Idolizing ChatGPT as a viable replacement is laughable, because it has no knowledge, no understanding, of what it says. It’s just repeating what it “learned” and connected. Ask about something new and it will simply lie, which is arguably worse than an unfriendly answer in my opinion.
The advice on Stack Overflow is trash because “that question has been answered already”. Yeah, it was answered 10 years ago, on a completely different version. That answer is deprecated.
Not to mention the number of convoluted answers that get voted to the top, while someone with two upvotes at the bottom meekly gives the answer you actually needed.
It’s like that librarian from the New York Public Library who determined whether or not children’s books would even get published.
She gave “Goodnight Moon” a bad score and it fell out of popularity for 30 years after the author died.
I don’t think that’s entirely fair. Typically, answers get upvoted when they worked for someone, so the top answer worked for more people than the other answers did. Now, there can be more than one solution to a problem, but neither the people who answer the question nor the people who vote on the answers can possibly know which of them works specifically for you.
ChatGPT will just as readily give you an answer that is technically correct but wrong for you, and only after some refinement give the answer you need. Not that different from reading all the answers and picking the one that works for you.
Of course older answers are going to have more upvotes if they technically work. That doesn’t mean it’s the best answer. It’s possible that someone would like to post a new, better answer and is unable to because of SO’s restrictions on posting.
The kinds of people who post on SO regularly aren’t going to be the people with the best answers.
On top of that, SO gives badges for upvoting, and possibly other benefits I’m unaware of.
As we saw with Reddit, upvote systems can be inherently flawed; we have no way of knowing whether an upvote is genuine.
Explains the huge swaths of bad advice shared on Reddit though. It’s shared confidently and with a smile. Positive vibes only!
What’s “Reddit”?
(I removed all my advice from there when it was considered “violent content” and “sexualization of minors”… go find your 3d printing, programming, system management and chemistry tips elsewhere, I did it anyway)
I hear you. I firmly believe that comparing the behavior of GPT with that of certain individuals on SO is like comparing apples to oranges though.
GPT is a machine, and unlike human users on SO, it doesn’t harbor any intent to be exclusive or dismissive. The beauty of GPT lies in its willingness to learn and engage in constructive conversations. If it provides incorrect information, it is always open to being questioned and will readily explain its reasoning, allowing users to learn from the exchange.
In stark contrast, some users on SO seem to have a condescending attitude towards learners and are quick to shut them down, making it a challenging environment for those seeking genuine help. I’m sure that these individuals don’t represent the entire SO community, but I have yet to have a positive encounter there.
While GPT will make errors, it does so unintentionally, and the motivation behind its responses is to be helpful, rather than asserting superiority. Its non-judgmental approach creates a more welcoming and productive atmosphere for those seeking knowledge.
The difference between GPT and certain SO users lies in their intent and behavior. GPT strives to be inclusive and helpful, always ready to educate and engage in a constructive manner. In contrast, some users on SO can be dismissive and unsupportive, creating an unfavorable environment for learners. Addressing this distinction is vital to fostering a more positive and nurturing learning experience for everyone involved.
In my opinion this is what makes SO ineffective and is largely why its traffic had dropped even before ChatGPT became publicly available.
Edit: I did use GPT to remove vitriol from and shorten my post. I’m trying to be nicer.
I think I see a core issue highlighted in your comment that seems like a common theme in this comment section.
At least from where I’m sitting, SO is not and has never been a place for learning, as in a substitute for novices learning by reading a book or documentation. In my 12 years with it, I’ve always seen it as a place where professionals and semi-professionals of varying experience and overlap share answers typically not found in the manual, which speeds up investigations and work by filling each other’s gaps. It’s not a place where people with plenty of time on their hands and/or a knack for teaching go to teach novices. Of course those people are there too, but that’s been a rare occurrence in my experience. So if a person expects a nice lesson instead of a terse answer from someone with five minutes or less, those expectations will be perpetually broken. For me that terse answer is enough more often than not, and its accuracy is infinitely more important than the attitude used to deliver it.
I expect a terse answer. I also am a professional. My experience with SO users is that they do not behave professionally. There’s not much more to it.
I don’t want to compare the behavior, only the quality of the answers. An unintentional error of ChatGPT is still an error, even when it’s delivered with a smile. I absolutely agree that the behavior of some SO users is detrimental and pushes people away.
I can also see ChatGPT (or whatever) as a solution to that, both as a moderator and as a source of solutions. If it knows the solution, it can answer immediately (plus reference where it got it from); if it doesn’t know the solution, it could moderate the human answers (plus learn from them).
That’s fair. You don’t have to compare the behavior. There’s plenty of that in the thread already.