G2 Crowd is a four-year-old startup based in Chicago, offering business software reviews. G2 Crowd is growing rapidly and currently has approximately 150,000 user reviews and nearly 800,000 user visits per month.
Buyers can read software reviews from their peers to find the right software for their business. Reviewers can leave reviews of business software they already own or answer other users' software questions. The third user group is software vendors, whom users can contact directly from the G2 Crowd website to ask software questions. Our team's goal was to identify usability issues with g2crowd.com.
Mapping Key User Journeys
We conducted interviews with the client to understand the project requirements, and with target users to understand their needs and frustrations. We then mapped the major interactions, or user journeys, on the website. This enabled us to understand g2crowd.com in detail and, in turn, perform a comprehensive analysis of competitors.
We sent a pilot survey to 50 users our client connected us with, followed by a full-fledged survey to respondents using Qualtrics. The survey gave us many useful insights into the needs of our target users and the usability issues they face.
To appropriately scope our comparative evaluation, we established competitor categories based on Prof. Newman's taxonomy:
1. Direct: Offers crowd-sourced user reviews and ratings for business software. (Offers the same functions in the same way.)
2. Indirect: Offers expert (i.e., not crowd-sourced) research, reviews, and ratings for business software. (Offers the same functions in a different way, i.e., through a different medium.)
3. Partial: Offers expert (i.e., not crowd-sourced) reviews and ratings for technology-related products. (Covers some but not all of the same functions.)
4. Parallel: Offers suggestions for alternate and new software products. (Offers the same kind of service/function to a similar audience via a similar channel.)
5. Analogous: Offers crowd-sourced user reviews and ratings for products and services in general. (Not the same type of service; a non-competitor that might give ideas about how to provide functions better.)
After conducting individual evaluations of the site, we met as a team to aggregate
our findings. Each team member presented his or her findings to the team, while
another member recorded the finding, along with the associated heuristic(s), in a
spreadsheet. After all the findings were aggregated, team members individually assigned
severity ratings to each finding using Nielsen's "Severity Ratings for Usability
Problems" (Nielsen, 1995b). We calculated the average of team members' severity
ratings for each finding, and selected the findings with the highest average severity
ratings as our key findings.
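The aggregation step above can be sketched as follows. This is a minimal illustration with made-up findings and ratings, not the team's actual data:

```python
# Hypothetical severity ratings (0-4, per Nielsen's scale) from four evaluators.
ratings = {
    "Compare button label changes unexpectedly": [3, 4, 3, 3],
    "Radar chart hidden behind 'read more' link": [3, 3, 4, 3],
    "Social media icons crowd the graph": [2, 1, 2, 2],
}

# Average each finding's ratings, then rank findings highest-severity first.
averages = {
    finding: sum(scores) / len(scores) for finding, scores in ratings.items()
}
key_findings = sorted(averages, key=averages.get, reverse=True)
print(key_findings)
```

Averaging smooths out individual evaluators' biases; the top of the sorted list becomes the set of key findings.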
After a heuristic evaluation using Nielsen's 10 heuristics, we performed eight usability tests in both lab and remote conditions.
Some findings, in brief:
1. The compare function was difficult to use and riddled with minor usability issues.
2. Information overload on the product comparison page.
3. Navigation needed both structural and semantic improvement.
4. Data visualizations were unnecessarily complex.
5. Poor affordance of clickable buttons.
6. A need to save product comparisons for future use.
Our recommendations were:
- Help users interpret data by using simple, well-labeled graphs.
- Use visual design principles to make this content easy to find, consume, and share.
- Restructure the compare function.
- Visually design interactive UI elements to support the actions users intuitively expect from them (affordance).
- Improve navigation by adding standard UI elements like breadcrumbs on all pages.
We found there was a need to help users interpret data by using simple, well-labeled graphs.
Simple tables and bar graphs make comparisons easy to perform. Most users would
agree that comparing along a straight line is easier than comparing across an arc, as on a
radar/spider chart. The following list outlines the specific issues with the radar chart:
1. It is difficult to reach this graph as it is hidden under a “read more” link under the
reviews. After clicking “read more”, a user expects to find a detailed review and
not this animated graph.
2. The grid area contains a substantial amount of non-data ink.
3. The visual organization of the page makes the plot of the graph difficult to
understand. For instance, the font is small, and the legend and the large, multicolored
social media icons are equidistant from the graph.
4. Users tend to assume the axes are related, i.e. the axes are pushing/pulling each
other, which may not be true.
5. There is occlusion in some cases especially if more than two series are plotted.
6. Labels are far away from the center.
7. The area of the shapes increases as the square of the values rather than linearly. A
small variation in values may result in a disproportionate difference in area.
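Point 7 can be verified with a quick sketch (hypothetical ratings, not G2 Crowd data): the polygon drawn on a radar chart is a fan of triangles around the center, so its area scales with the product of adjacent values. Doubling every rating quadruples the filled area.

```python
import math

def radar_polygon_area(values):
    """Area of the polygon formed by plotting `values` on evenly
    spaced radar-chart axes (a triangle fan around the center)."""
    n = len(values)
    angle = 2 * math.pi / n
    # Each triangle's area is 1/2 * r_i * r_{i+1} * sin(angle between axes).
    return 0.5 * math.sin(angle) * sum(
        values[i] * values[(i + 1) % n] for i in range(n)
    )

small = radar_polygon_area([2, 2, 2, 2, 2])
large = radar_polygon_area([4, 4, 4, 4, 4])
print(large / small)  # doubling every value quadruples the area
```

This is why bar charts, whose perceived magnitude grows linearly with the value, are a safer choice for comparisons.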
The product comparison graphs are difficult to access since they are hidden
under a "see more" link on the product comparison page. Specific issues include:
1. Chat windows occlude certain areas of the graph.
2. What appears to be a legend is actually a set of clickable filters.
3. The labels are tiny and not legible.
4. It’s difficult to visually distinguish the axis title from the axis variable.
5. The distance between labels and key areas of the graph needs to be reduced.
6. Social media icons near the graph axis obstruct clear, preattentive perception of the data.
The G2 Crowd Grid, as seen on a category page, is helpful but has many usability issues that lead to inefficiency and confusion:
1. The “Market Presence” axis could be on the left side.
2. Each axis should have a distinctly colored line with an arrow and a distinctly styled typeface.
3. Some users were puzzled by the choice of word “niche” since a product with a
very low satisfaction and market presence would be in the “niche” quadrant even
though the word niche means specialty and is not related to dissatisfaction.
4. Some users, including a Master of Business student, were not able to identify at
first glance the company that was the leader in the Grid. We recommend adding
the prefix "Q1" before the leader quadrant label to clearly indicate that "Leader" is a
quadrant, not an axis.
5. Clicking a company icon in the Grid should offer an "add to compare" option.
6. The large, colorful social media icons unnecessarily distract and devour attention,
and should be removed from their current location near the x-axis.
7. Some subjects reported that the meaning of the G2 Score could only be seen on
mouse-down and not on mouse-hover.
According to a user survey previously conducted by our team, product comparison is one
of the most used features on G2 Crowd’s website. However, our test subjects
consistently struggled when trying to use the product comparison feature. Part of the
problem is that users are confused by the changing label of the “Add to Compare” button
under each product.
Once users have successfully selected products for comparison, they are presented with
a series of green and grey bar graphs that represent ratings for specific aspects of the
products. Green bars represent the “winner” for a category; grey bars
represent the “losers.” Almost none of our test subjects understood the meaning behind
these colors. Some even wondered if the grey color meant the bar was “disabled” or the
data was “unverified.”
Findings: Visual Design
The following recommendations would increase the legibility of the text:
1. Darken the text color.
2. Increase the typeface size.
3. Increase the space between lines of text, also called leading.