Regulating Online Behaviour on Instagram
Common themes and benefits associated with Web 2.0 platforms include user involvement, online collaboration, participatory cultures and two-way, mutually beneficial relationships (particularly from an organisational or marketing perspective). As a result, online users feel they not only have the right, but are actively encouraged, to comment, tweet, hashtag and blog their personal contributions and thoughts, and to share them across the web.
However, just as in the real world, such as at university or in the workplace, online communities have normative or accepted behaviours for their users, and these differ greatly from platform to platform. For example, Rotten Tomatoes expects reviewers to leave honest reviews, which may often highlight negative perspectives, such as a poorly developed plot line or character in a movie. By contrast, a health support community, such as the body-image forum Women's Forum Australia, expects comments to be encouraging and supportive.
That said, there will always be a percentage of people who behave in ways that are unacceptable, or that push the boundaries of a particular website or platform. These 'bad actors' are commonly referred to as 'trolls', 'manipulators', 'spammers' and even 'flamers'. Therefore, for online contributions to be useful for harnessing collective intelligence and for crowdsourcing purposes, and ultimately to ensure the Web 2.0 platform remains enjoyable, behaviour must be regulated.
Behaviour Regulation and Instagram
For those of you who aren’t familiar with Instagram, it is a popular photo-sharing social media platform where users can upload, edit and share images with their ‘followers’ and use hashtags to ‘categorise’ them, much like Twitter. Instagram launched in 2010 and has grown rapidly, with a reported average of 60 million photos uploaded each day. I use Instagram daily, for both personal and work-related purposes, and throughout my use I have seen spam-like and inappropriate behaviour, which prompted my research into how Instagram actually regulates behaviour on this ever-expanding platform.
I believe Instagram is implementing two of the design claims relating to good regulation of behaviour in online communities: limiting the effects of bad behaviours, and coercing compliance by limiting behaviours. Instagram gives users obvious functions for reporting inappropriate and spam behaviour, such as the ability to directly report an image or comment, which Instagram then moderates against its policies on what is and is not allowed. Instagram sends warnings to users who have behaved poorly, and can suspend an Instagrammer’s account for continued breaches or for a misuse of the service it deems serious enough. However, these practices have opened Instagram up to criticism from some users, who argue that such consequences simply encourage offenders to pursue the behaviour further, particularly if they feel their account was unfairly or wrongly deactivated.
One feature I think Instagram was right to implement, and one well suited to limiting bad behaviours and coercing compliance, is its usage limits on certain actions. Like Twitter, Instagram imposes activity quotas to prevent spam-like activity, such as following, liking or tagging too many people and their images within specific time periods.
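To make the quota idea concrete, here is a minimal sketch of a sliding-window activity limit. The specific numbers (3 follows per hour) are purely illustrative assumptions for the example; Instagram does not publish its real thresholds, and this is not its actual implementation.

```python
import time
from collections import deque


class ActivityQuota:
    """Sliding-window quota: allow at most `limit` actions per `window_seconds`.

    The limit and window values used below are illustrative only,
    not Instagram's real (unpublished) numbers.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # times of recent allowed actions

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Discard actions that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False


# Example: at most 3 'follow' actions per hour (illustrative numbers)
quota = ActivityQuota(limit=3, window_seconds=3600)
results = [quota.allow(now=t) for t in (0, 10, 20, 30)]
print(results)  # the fourth follow inside the hour is blocked
```

A sliding window like this is gentler than a hard hourly reset: a user who spreads their activity out is never blocked, while a burst of rapid-fire follows trips the quota immediately.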
One recommendation I think Instagram could consider, to encourage voluntary compliance and reduce the number of offences, would be to offer reminders at the point of action when users are about to violate normative behaviour. For example, a warning message could appear when a user comments on the photo of a person they do not follow, or when they have commented more than 2-3 times in the space of a few hours, similar to eBay's negative feedback warnings.
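The point-of-action reminder suggested above can be sketched as a simple check run just before a comment is posted. The thresholds (3 comments within 3 hours) and the function name are hypothetical, drawn from the example in the recommendation rather than any real Instagram feature.

```python
def should_warn(commenter_follows_author, recent_comment_times, now,
                max_comments=3, window_seconds=3 * 3600):
    """Decide whether to show a point-of-action reminder before posting.

    Warn if the commenter does not follow the photo's author, or if they
    have already left `max_comments` comments within the last window.
    Thresholds are illustrative, matching the 2-3 comments / few hours
    example in the post, not a real Instagram rule.
    """
    recent = [t for t in recent_comment_times if now - t < window_seconds]
    return (not commenter_follows_author) or len(recent) >= max_comments


# Commenting on a stranger's photo triggers a reminder
print(should_warn(commenter_follows_author=False,
                  recent_comment_times=[], now=0))
```

Because the check runs before the comment is saved, it nudges users toward compliance without blocking them outright, which fits the 'voluntary compliance' goal better than an after-the-fact suspension.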
I am interested to know what you think. How do you think Instagram may change their terms and actions for regulating behaviours in the future? Do you think what they are doing is working or is there something else they could be doing? I look forward to reading your comments and suggestions!
- Clough, E. (2014). Comment: Policing bad behaviour online shouldn’t mean dobbing. SBS News. Retrieved from http://www.sbs.com.au
- Foner, J. (2011). We are the network: Handling bad behavior online, strategies and implications [Web log post]. Retrieved from http://joelfoner.com/2011/01/httpjoelfoner-com2011012011-01-11-we-are-the-network-handling-bad-behavior-online-strategies-and-implications/
- Instagram. (2014). Our story. Retrieved from http://instagram.com/press/
- Kiesler, S., Kraut, R., Resnick, P., & Kittur, A. (2010). Regulating behavior in online communities. Retrieved from http://kraut.hciresearch.org/sites/kraut.hciresearch.org/files/articles/kiesler10-Regulation-current.pdf
- O’Reilly, T. (2005). What is Web 2.0. Retrieved from http://oreilly.com/pub/a/web2/archive/what-is-web-20.html?page=2
- Wirtz, J., Den Ambtman, A., Bloemer, J., Horvath, C., & Ramaseshan, B. (2013). Managing brands and customer engagement in online brand communities. Journal of Service Management, 24(3), 223-244. Retrieved from http://www.search.proquest.com