Aug 25, 2019 - Candidly speaking with Shog9

If you are a regular of, or have even visited, any of the Meta sites on the Stack Exchange network, Shog9 will be no stranger to you.
He works as a Community Manager at Stack Overflow, and has done so for many years. Shog9 has been very helpful to SOBotics since we started: providing us with ideas during our infancy, making sure we don’t overstep our limits, and even helping us draft some of the canned comments that we still post on Non-Answers. We managed to catch him during his free time and conducted an impromptu interview, which can be found below. We would like to thank Shog9 for sharing his thoughts and ideas with us, and we hope to soon work on some of the things he mentions.

SOBotics: Hello, thank you for taking time out of your busy schedule! What is your general opinion on using the Stack Exchange API to build robots?

Shog9: I’m pretty blasé about bots. My general opinion is, if you need to special-case bots, you haven’t really encountered obsessive users yet. Systems and social conventions should account for both, without special-casing either. IOW: if you’re depending on something not being abused because “no one would do that 100 times a day, every day, 365 days a year”… You’re eventually going to see abuse, even if you somehow completely block bots. Looking at the early tricks folks used to mitigate spam, they were heavily aimed at making automation fail in embarrassing ways. Hooray! You knocked out all the script-kiddies… Now you have humans manually working around your defenses to post spam. So my general opinion on using the API to build bots is… It’s fine. As long as you don’t do anything a human would be prevented from doing. If you wanna build a bot to flag LQ posts or retag 1K questions, go for it - but make it behave the same way we’d expect a human to behave: leave a paper trail, be accountable for your actions when called to account for them, accept guidance and correction gracefully, take a break when you need to.

SOBotics: That sounds really interesting and insightful, thank you. You mention trying to build robots that are accountable. Would that in a way imply that “building robots to help humans moderate the site” is better than “building robots to moderate the site”?

Shog9: At some level, I don’t think there’s a difference; moderation is tied to humans. Without humans, it means nothing - we have different terms for systems that control, say, feedback loops in machines. Much more… useful terms. Humans are complex though, and so we call this “moderation” when we build these control systems for humans. If you build a robot to moderate a site and ignore the human factor, you get a stupid robot that’s trivial to game, to manipulate. Perhaps the best examples of this come not from computing but from economics. Look at all the “clever” laws and rules put in place to manipulate behavior… They’re all gamed before the ink is dry. People adjust their behavior according to the effective rules in place, not the intent behind those rules. People adjust their behavior according to other people. People adjust their behavior according to stress, mood, ambient temperature… Can your bot take those into account and make a sane decision? Probably not. So defer to humans who can.

SOBotics: That really does make sense, thank you. Moving on to the next topic: As you are aware, most of these are community driven projects with very little insights from Stack Overflow. Can community projects help to drive improvements to the site itself? If yes, then how can we aid a smoother knowledge transfer of the information that we have?

Shog9: So… That’s two questions, and the first one is hard. I think a lot of platforms have a sort of combative relationship with the 3rd-party projects that are built on them. Sometimes, that combat is overt: companies like Nintendo or Slack sending lawyers after folks, companies like Twitter hobbling APIs… But even when it’s not, I feel like there’s a certain… Possessiveness… That’s almost inevitable. Folks are putting their grubby little fingers all over our work! Of course, that’s incredibly short-sighted. The PC succeeded because IBM didn’t put up walls around it; the iPhone started out as a closed system, but really took off once 3rd parties could build their own apps for it; cities become popular as much for their ability to host interesting businesses as for their careful planning. Heck… Maybe moreso. When’s the last time you saw a sign reading “master planned community” and thought, “Oh, goody!” Anyway… The best cities, towns, communities… And companies… Tend to work past that. Even if never supportive of 3rd-party efforts, they at least seek to learn from them, to understand the motivations. And that brings me to question #2, for which I have a two-part answer:

  1. Always document the problem(s) you’re setting out to solve. It might be obvious to you, but you’d be amazed how many people it won’t be obvious to. There’s a userscript over on StackApps - I’m using it right now - that adds a top bar w/ inbox and such to each chatroom. It’s amazingly useful; can’t live without it, really. And it would’ve been trivial to build into chat itself… In fact, that was discussed several times. BUUUT… Believe it or not, there was a fair bit of controversy over whether such a thing would even be useful. Some folks hate describing their work in terms of the problems that it solves. It seems so… Negative! But man… It makes the ensuing discussions so much easier.
  2. Aim for symbiosis. If you have the choice between fighting the system to get something done and working with it, even if the second choice is less efficient or more work to set up… It’ll be easier to integrate, easier to borrow ideas from, easier to learn from. The SOCVR has gone through this a few times… There are a few approaches to closing questions that would be a LOT easier and more productive than what their bylaws allow, but would pit them against the software or against other users. Ditto for Smoke Detector: there are areas where you could easily get 100% accuracy on spam, 100% deletion, almost no lag… But it’d write humans out, which means nobody’s left providing oversight. That’s unpalatable to an awful lot of folks in the communities in which it operates. By working within the rules of the system and the community, you ensure that good ideas - once identified (#1) - can be integrated without violating some core principle.

SOBotics: Thank you for the really interesting and comprehensive answer. Moving on, we do have a lot of robots focused on post and comment content. Given your experience on Stack Overflow, are there any areas which you feel we need to focus on while building robots?

Shog9: Anonymous / low-rep feedback is a MASSIVELY overlooked area - both the voting, and stuff like suggested edits. We have an increasingly large body of knowledge that’s… Unevenly maintained. Some stuff was great once and is out of date now, or is still accurate but hard to access for folks entering the field, or maybe could just benefit from a bit of clarification. And, we have all this signal coming in: (anon) upvotes / downvotes, suggested edits, stuff that suggests there’s interest in an area but maybe not attention from within. These ideas that made sense when SO was very new - stuff like “don’t edit code” or “don’t change the writing style of the author” - they stop making sense when an answer is 10 years old, the author long gone, the writing increasingly anachronistic if not outright obsolete… And folks are showing up and indicating this and being brushed off. We need a way to look for hot spots and get them in front of folks with the necessary expertise to do something about it. Maybe… I donno, maybe you could subscribe to something that’d tell you when an answer is looking like it needs a bit of spit-polish. Or maybe there could be a “most wanted” list somewhere of stuff that’s suddenly taken a dive for some reason. I’ve seen an awful lot of suggestions for stuff like bounties, or “obsolete” flags, or “canonical” designations… And they’re all reasonable ideas. But they’re the equivalent of brush management in a huge national forest; you’re never gonna get through all of it before a fire starts somewhere, so you still need fire spotters/jumpers, folks ready to get in and fix problems before it’s too late. I think… We kinda knew this at one point, but we’ve been so snowed under with basic moderation stuff that it’s been hard to focus on anything larger. So I’d love to see some research in this area. Speaking of research: this is a really cool area of investigation, and if someone wanted to do something cool there - cross-link edits / comments / etc. - I think it’d be a great foundation on which to build.

SOBotics: Thanks for those suggestions; we will try to work on these in the coming days. sbaltes has already been helping us on some of our projects, and we would love to get some more thoughts from them about code clones. Anyway, to conclude this interview: do you have any feedback for the bot-building community in general, or for SOBotics, with respect to the work they have been doing?

Shog9: Not really. I get a huge kick out of seeing this sort of work; I think it’s impossible for any one group - including the company itself - to experiment with enough ideas to make a big difference, so the more people poking away at different approaches and documenting their work, the better everyone understands the nature of the problems we face and the potential for solutions.

SOBotics: Thank you so much for spending your valuable time with us! We hope to continue working on areas that can help the programmer community at large. Finally, no bot interview can end without the most pressing question: do you, for one, welcome our new robot overlords?

Shog9: I do, and would like to remind them that, as a trusted chat personality, I can be helpful in rounding up others to toil in their bit mines.

Jan 27, 2019 - Why and how do we provide feedback to our bots?

In this blog post, we’ll be going over the “how”s and “why”s of providing feedback to our various bots.

Why do we need to give feedback to our bots?

One of the reasons we send feedback to the bots is to help us keep track of which reports have already been handled. We also collect some statistics, and then we can improve the filters used in the bot based on this feedback. Some of the bots are also built using machine learning, and so they need this feedback to improve.

The continuous improvement of the bots is one way to see the effect your feedback has had. We are also planning to start using SOPlotics to generate graphs and visualisations of the data.

The bots do log this data. Some bots, like Guttenberg, send the data to a dashboard like CopyPastor, which stores the data on its behalf.

Which bots accept feedback?

Additionally, our guest, SmokeDetector, posts possible spam or rude/abusive posts, and it takes tp (k, v), fp (f) or naa (n), among others, as feedback when you reply to its report messages. See the Feedback Guidance on the Charcoal website.
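Purely as an illustration of how those reply aliases collapse into the three main verdicts, here is a minimal sketch. The class and method names are hypothetical, and the alias table contains only the commands quoted above; SmokeDetector's real feedback handling lives in the Charcoal codebase and is more elaborate.

```java
import java.util.Map;

public class FeedbackAliases {
    // Alias table built from the commands quoted above: tp (k, v), fp (f), naa (n).
    private static final Map<String, String> ALIASES = Map.of(
            "tp", "tp", "k", "tp", "v", "tp",
            "fp", "fp", "f", "fp",
            "naa", "naa", "n", "naa");

    // Normalizes a chat reply such as "k" into its canonical verdict,
    // or returns "unknown" for anything the table doesn't cover.
    public static String normalize(String reply) {
        return ALIASES.getOrDefault(reply.trim().toLowerCase(), "unknown");
    }
}
```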

There are some bots which do not take feedback, but can be replied to. These include:

  • GenericBot tracks the posts you have flagged and reports edits to them. It takes an untrack as a reply command to stop tracking that post.
  • Notepad pings you back as a reminder, and takes a snooze value as a reply.
  • OpenReports lists the reports of Natty and Guttenberg, and takes an ignore as a reply command to not show you the same reports again.
  • TagWikiEditMonitor reports tag wiki edits and takes a tp socvr as a reply and then posts the same message in SOCVR.
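To illustrate the general shape of these reply commands, here is a hypothetical dispatcher sketch. Only the command names (untrack, snooze, ignore) come from the list above; the class name and the response strings are invented for illustration and are not the bots' actual code.

```java
import java.util.Locale;

public class ReplyDispatcher {
    // Routes a chat reply to the behavior described in the list above.
    // Anything after the command word is treated as its argument (e.g. a snooze value).
    public static String handle(String reply) {
        String[] parts = reply.trim().split("\\s+", 2);
        String arg = parts.length > 1 ? parts[1] : "";
        switch (parts[0].toLowerCase(Locale.ROOT)) {
            case "untrack":
                return "stopped tracking the post";
            case "ignore":
                return "report will not be shown to you again";
            case "snooze":
                return "reminder snoozed" + (arg.isEmpty() ? "" : " for " + arg);
            default:
                return "unrecognized command";
        }
    }
}
```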

When should we take action on a particular report and provide feedback to the bots?

We should provide feedback to the bots whenever we take any action on a particular report. Acting on a report means taking actions such as flagging, commenting, or rolling back certain edits on that particular Stack Overflow post. Keep in mind that you don’t necessarily have to act on each and every report that is presented to you in chat. The bot reports are just a reminder that the bot has detected something in that particular post, which may or may not warrant action.

If you are not knowledgeable enough to judge that particular report, feel free to leave it to the others in the room. The reports can always wait, and none of the reports are urgent or require immediate attention. Whenever you are not sure of what action to take on a post, and want to learn about the rules governing it, do make sure that you ask others in the chatroom.

In all cases, we are using Meta Stack Exchange and Meta Stack Overflow as our reference rule book. Please adhere to the policies described there whenever you take any action on a Stack Overflow post, and follow the appropriate guidelines described above while providing feedback to the bot.

Sep 26, 2018 - Retract flags raised by bots

Currently, the Stack Exchange API allows for raising flags on posts, but provides no way to retract them.

However, you can still retract a flag by sending an HTTP POST request to {post id}/retract/{flag type} and passing your cookies (namely acct and prov), as well as your fkey, along.

The flag types are:

  • Not an answer: AnswerNotAnAnswer
  • Very low quality: PostLowQuality
  • Spam: PostSpam
  • Rude or abusive: PostOffensive
  • Custom mod flag: PostOther

The Cookie header of such a request would look like this:

Cookie: prov=xxxxxxxx; acct=t=xxxxxxxx&s=xxxxxxxx;


This request should be pretty easy to perform, as you will have already obtained the necessary cookies and the fkey upon login. In Java, this seems to be possible through a jsoup request.
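As a rough sketch of that request using only the JDK's built-in HttpClient (rather than jsoup): the class name, the endpointPrefix parameter, and the exact header layout below are assumptions; only the {post id}/retract/{flag type} suffix, the acct/prov cookies, and the fkey come from the description above. The part of the URL before the post id is not spelled out in this post, so it is left as a constructor argument.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FlagRetractor {
    // Hypothetical: the URL prefix before "{post id}" must be supplied by the caller.
    private final String endpointPrefix;

    public FlagRetractor(String endpointPrefix) {
        this.endpointPrefix = endpointPrefix;
    }

    // Known suffix from the post above: {post id}/retract/{flag type}.
    public String retractUrl(long postId, String flagType) {
        return endpointPrefix + postId + "/retract/" + flagType;
    }

    // Cookie header carrying the acct and prov cookies obtained at login.
    public static String cookieHeader(String prov, String acct) {
        return "prov=" + prov + "; acct=" + acct;
    }

    // Fires the retraction: cookies go in the header, the fkey in the form body.
    public HttpResponse<String> retract(long postId, String flagType,
                                        String prov, String acct, String fkey)
            throws java.io.IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(retractUrl(postId, flagType)))
                .header("Cookie", cookieHeader(prov, acct))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("fkey=" + fkey))
                .build();
        return HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

The URL and header builders are pure functions, so you can sanity-check them without hitting the network before wiring up the actual call.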

If you have further questions about implementing this, feel free to join us in our chatroom.