Information Visualization (InfoVis) is now an accepted and growing field, with numerous visualization components used in many applications. However, questions remain about the potential uses and maturity of novel visualizations. Usability studies and controlled experiments are helpful, but generalizing their results is difficult. We believe that the systematic development of benchmarks will facilitate the comparison of techniques and help identify their strengths under different conditions. A benchmark typically consists of a dataset, a list of tasks, and a list of non-trivial discoveries. We were involved in organizing three information visualization contests for the 2003, 2004, and 2005 IEEE Information Visualization Symposia. Our goals were to encourage the development of benchmarks, to advance the InfoVis field by making difficult problems available, to create a forum for discussing evaluation, and to provide an engaging event at the InfoVis conference. The materials produced by the contests are archived in the Information Visualization Benchmark Repository. We review the state of the art and challenges of evaluation in InfoVis, describe the three contests, summarize their results, discuss outcomes and lessons learned, and speculate on the future of visualization contests.