This is a constant topic of discussion after every testing hackathon. Each stakeholder has different views, opinions, and expectations of a testing hackathon, and it is almost impossible to satisfy everyone. In this article, we present our views from the perspectives of participants, organizers, and app companies. The list is by no means exhaustive, so we would appreciate your help in expanding it. Please add your viewpoints, thoughts, and suggestions. 😊
Participants: This category has mixed views on this topic.
Views in Support:
- Based on data from the other contestants, participants can adjust and optimize their strategy.
- Participants can estimate their chances of winning and roughly calculate when to stop.
- It is possible to improve test strategies by learning from others’ bug reports.
- First Timers/Newbies can get an idea of various testing aspects in a Hackathon.
- The number of duplicate issues would be significantly reduced.
- By analyzing the types of bugs that others have logged, participants can be better prepared next time with different test techniques, ideas, tools, strategies, and so on.
- Participants can connect with other participants who demonstrated impressive testing skills. This can trigger a whole new learning experience and growth for everyone.
- Knowing the commonly raised bugs can give an idea of key areas and issues to look for in future hackathons.
Views Against:
- Some participants don’t want to keep their work public because it can bring them unwanted attention, both positive and negative. Remember, a winner also makes mistakes in the bigger game; it’s just that they have most likely made fewer mistakes than the other contestants.
- A few folks have mentioned not wanting to disclose which app/feature they are testing. This could again attract unwanted attention.
- Fear of getting judged based on the reports and live work.
- Some people become overly self-conscious about the quality of the work they are doing.
- Only the product team should be notified of security or business-sensitive bugs.
- Some participants don’t want to know how many bugs the other testers reported. It could affect their morale if someone else logged an obvious bug that they missed.
- When you are a beginner and don’t know about specialized testing types, seeing bugs in different categories such as security testing, UX/UI, and usability testing can be overwhelming.
Organizers and App Companies:
- Maintainability and infrastructure are major challenges.
- App companies want open feedback from participants. Providing a live bug tracking system might influence where people look for bugs.
- Quality is value to someone who matters. The same thing might mean different things to different people, so a stakeholder’s idea of high-value issues may differ from a participant’s idea of high-value bugs. This can create conflict and confrontation. Ultimately, it must be the stakeholder’s decision, since they know what’s valuable.
- Participation could drop if new members/beginners see slimmer chances of winning.
- It is unlikely that most companies would want to make their raw bug data, risks, issues, enhancement suggestions, etc. public. This is a crucial and confidential data point for most organizations.
- Founders and companies know what’s important to them. For most companies, the quantity of bugs is not even a winning criterion.
- They would surely not want to be judged based on the number of bugs raised against their product, as raw numbers can sometimes give a misleading picture.
- Some participants may raise disputes over duplicates: which bug was raised first, or why a similar bug with a different severity (than theirs) was accepted. Settling those disputes would be an added headache for the app companies.
Do share your views, thoughts, and suggestions in the comments.
Sowmya Sridharamurthy shared a suggestion that can make the feedback loop better. Here are her thoughts:
Statistical metrics can be shared both during the event and after it concludes.
1. Live counter of the number of incoming bugs
2. Triage counters for bugs (new defects, duplicates, not bugs)
3. Post-mortem statistics
A. Which parts of the application were touched more
B. Which parts of the application received how many bugs
C. Statistics about types of issues: functional, non-functional, accessibility, security, etc.
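The metrics above are straightforward to tally from raw bug records. A minimal Python sketch, assuming a hypothetical list of bug dicts; the field names (`status`, `area`, `issue_type`) are illustrative assumptions, not the schema of any real hackathon platform:

```python
from collections import Counter

# Hypothetical bug records logged during a hackathon (assumed schema).
bugs = [
    {"status": "new", "area": "checkout", "issue_type": "functional"},
    {"status": "duplicate", "area": "checkout", "issue_type": "functional"},
    {"status": "new", "area": "login", "issue_type": "security"},
    {"status": "not-a-bug", "area": "login", "issue_type": "usability"},
    {"status": "new", "area": "search", "issue_type": "accessibility"},
]

def hackathon_stats(bugs):
    """Compute the shareable counters: live total, triage breakdown,
    and post-mortem statistics by application area and issue type."""
    return {
        "incoming": len(bugs),                              # 1. live counter
        "by_status": Counter(b["status"] for b in bugs),    # 2. triage counters
        "by_area": Counter(b["area"] for b in bugs),        # 3A/3B. area stats
        "by_type": Counter(b["issue_type"] for b in bugs),  # 3C. issue types
    }

stats = hackathon_stats(bugs)
print(stats["incoming"])                # total incoming bugs
print(stats["by_status"]["duplicate"])  # how many duplicates were filtered
```

Note that only these aggregates are shared, never the raw bug reports themselves, which keeps confidential details private while still closing the feedback loop.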