Projective simulation is a model for intelligent agents with a deliberation capacity based on episodic memory. The model has been shown to provide a flexible framework for constructing reinforcement-learning agents, and it allows for a quantum-mechanical generalization, which leads to a speedup in deliberation time. Projective simulation agents have been applied successfully to complex skill learning in robotics and to the design of state-of-the-art quantum experiments. In this paper, we study the performance of projective simulation on two benchmark navigation problems, the grid world and the mountain car problem. We compare projective simulation to the standard tabular reinforcement-learning approaches Q-learning and SARSA. Our comparison demonstrates that projective simulation and the standard approaches perform qualitatively and quantitatively similarly, while near-optimal model parameters are much easier to find for projective simulation, reducing the computational effort of parameter tuning by one to two orders of magnitude. Our results show that the projective simulation model stands out for its simplicity in terms of the number of model parameters, which makes it straightforward to set up the learning agent in unknown task environments.