Troilk/VacuumCleaner


This project is an environment for experimenting with a toy AI problem, the vacuum cleaner world, originally described in the book Artificial Intelligence: A Modern Approach. It is a C# adaptation of this C++ project with a more user-friendly interface.

Video of environment usage

Problem description

In this simple world, the vacuum cleaner agent has a bump sensor and a dirt sensor, so it knows whether it hit a wall and whether the current tile is dirty. It can go left, right, up, and down, clean dirt, and idle. The performance measure is to maximize the number of clean rooms over a certain period while minimizing energy consumption. The geography of the environment is unknown. At each time step, each room has a certain chance of gaining 1 unit of dirt.

  • Prior knowledge

  1. The environment is a square, surrounded by walls.
  2. Each cell is either a wall or a room.
  3. The walls are always clean.
  4. The agent cannot pass through walls.
  5. The agent can go north, south, east, and west. Each move costs 1 point of energy.
  6. The agent can clean dirt, each action removing 1 unit of dirt. Each cleaning costs 2 points of energy.
  7. The agent can stay idle, costing no energy.
  • Performance measure

Given a period T, the goal is to

  1. Minimize the sum of the amount of dirt in all rooms over T.
  2. Minimize the consumed energy.
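
The rules above translate directly into a small simulation loop. The following C# sketch is only illustrative: the type and member names (AgentAction, World, Step, TryMove, dirtSpawnChance) are hypothetical and are not taken from the project's source; it simply encodes the action costs, the per-step dirt spawning, and the two quantities the performance measure penalizes.

```csharp
using System;

// Illustrative sketch of the environment rules above; all names here are hypothetical.
public enum AgentAction { MoveNorth, MoveSouth, MoveEast, MoveWest, Clean, Idle }

public class World
{
    private readonly bool[,] isWall;          // walls are impassable and always clean
    private readonly int[,] dirt;             // units of dirt per room
    private readonly Random rng = new Random();
    private readonly double dirtSpawnChance;   // chance of a room gaining 1 unit of dirt per step

    public int ConsumedEnergy { get; private set; }
    public long DirtPenalty { get; private set; }  // sum of all dirt over the period T

    public World(bool[,] walls, double dirtSpawnChance)
    {
        isWall = walls;
        dirt = new int[walls.GetLength(0), walls.GetLength(1)];
        this.dirtSpawnChance = dirtSpawnChance;
    }

    public void Step(ref int x, ref int y, AgentAction action)
    {
        // Moving costs 1 energy (bumping into a wall wastes the move),
        // cleaning removes 1 unit of dirt for 2 energy, idling is free.
        switch (action)
        {
            case AgentAction.MoveNorth: TryMove(ref x, ref y, 0, -1); break;
            case AgentAction.MoveSouth: TryMove(ref x, ref y, 0, 1); break;
            case AgentAction.MoveWest:  TryMove(ref x, ref y, -1, 0); break;
            case AgentAction.MoveEast:  TryMove(ref x, ref y, 1, 0); break;
            case AgentAction.Clean:
                ConsumedEnergy += 2;
                if (dirt[x, y] > 0) dirt[x, y]--;
                break;
            case AgentAction.Idle:
                break; // no energy cost
        }

        // Each room independently has a chance of gaining 1 unit of dirt,
        // and the remaining dirt is accumulated into the performance penalty.
        for (int i = 0; i < dirt.GetLength(0); i++)
            for (int j = 0; j < dirt.GetLength(1); j++)
            {
                if (!isWall[i, j] && rng.NextDouble() < dirtSpawnChance) dirt[i, j]++;
                DirtPenalty += dirt[i, j];
            }
    }

    private void TryMove(ref int x, ref int y, int dx, int dy)
    {
        ConsumedEnergy += 1;
        // The border of the map is assumed to be walls, so the indices stay in range.
        if (!isWall[x + dx, y + dy]) { x += dx; y += dy; }
    }
}
```

An agent would receive its bump and dirt sensor readings each step and return one of the AgentAction values; the environment then charges energy, updates dirt, and accumulates the penalty.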

Agents

The project contains 3 default agents:

  • RandomAgent - performs random actions on each iteration.
  • ModelAgent - works in 2 stages:
  1. Map discovery. A (2n - 1) x (2n - 1) map is created, where n is the width/height of the real map, and the agent is assumed to be at the center of this map. The agent chooses among the neighboring tiles a cell it has not yet visited (call it "black") and moves to it. If the current tile has no unexplored neighbors (it is "white"), the agent searches for the shortest path to the nearest "grey" tile (a visited tile that still has unexplored neighbors) using the A* algorithm, with the Manhattan distance from the current tile to the nearest "grey" tile as the heuristic (see the first sketch after this list). If the map has no "grey" tiles left, all accessible tiles have been explored and the first stage is finished. All unexplored tiles are then marked as walls to avoid problems in the second stage. The coordinates of the top-left tile are determined and the map is trimmed to a smaller (n x n) map.

  2. Regular map traversal. A simple greedy algorithm is used: the agent moves to the neighboring tile that has not been visited for the longest time. It may also decide to idle, based on an estimate of the dirt respawn time. To estimate this time, the agent sums all time intervals between dirt cleanings and divides the sum by the number of dirt collections (see the second sketch after this list).

  • ModelAgentNoIdle - behaves like the previous agent but does not try to predict when to idle.
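
The path search in the map-discovery stage can be sketched as a standard grid A* toward the nearest "grey" tile, using the Manhattan distance to the closest "grey" tile as the heuristic, as described above. The names below (Tile, Discovery, PathToNearestGrey) are hypothetical and not the project's API; this is a minimal sketch assuming a rectangular tile map in which walls are impassable.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum Tile { Wall, Black, Grey, White }  // black = unvisited, grey = visited with unexplored neighbors

public static class Discovery
{
    static readonly (int dx, int dy)[] Moves = { (1, 0), (-1, 0), (0, 1), (0, -1) };

    // Returns a path (list of coordinates, start first) to the nearest grey tile, or null if none is left.
    public static List<(int x, int y)> PathToNearestGrey(Tile[,] map, (int x, int y) start)
    {
        var greys = new List<(int x, int y)>();
        for (int x = 0; x < map.GetLength(0); x++)
            for (int y = 0; y < map.GetLength(1); y++)
                if (map[x, y] == Tile.Grey) greys.Add((x, y));
        if (greys.Count == 0) return null;  // discovery is finished

        // Heuristic: Manhattan distance to the nearest grey tile.
        int H((int x, int y) p) => greys.Min(t => Math.Abs(t.x - p.x) + Math.Abs(t.y - p.y));

        var open = new SortedSet<(int f, int g, int x, int y)> { (H(start), 0, start.x, start.y) };
        var cameFrom = new Dictionary<(int, int), (int, int)>();
        var bestG = new Dictionary<(int, int), int> { [start] = 0 };

        while (open.Count > 0)
        {
            var cur = open.Min;            // node with the lowest f = g + h
            open.Remove(cur);
            var pos = (cur.x, cur.y);
            if (map[cur.x, cur.y] == Tile.Grey) return Reconstruct(cameFrom, pos);

            foreach (var (dx, dy) in Moves)
            {
                var next = (x: cur.x + dx, y: cur.y + dy);
                if (next.x < 0 || next.y < 0 ||
                    next.x >= map.GetLength(0) || next.y >= map.GetLength(1)) continue;
                if (map[next.x, next.y] == Tile.Wall) continue;  // cannot pass through walls
                int g = cur.g + 1;                               // every move costs 1
                if (bestG.TryGetValue(next, out int old) && old <= g) continue;
                bestG[next] = g;
                cameFrom[next] = pos;
                open.Add((g + H(next), g, next.x, next.y));
            }
        }
        return null;  // no grey tile is reachable
    }

    static List<(int x, int y)> Reconstruct(Dictionary<(int, int), (int, int)> cameFrom, (int x, int y) end)
    {
        var path = new List<(int x, int y)> { end };
        while (cameFrom.TryGetValue(path[path.Count - 1], out var prev)) path.Add(prev);
        path.Reverse();
        return path;
    }
}
```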
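Similarly, the stage-2 behavior reduces to two small pieces: a running average of the time between dirt cleanings (used to decide when idling is worthwhile) and a greedy choice of the least recently visited neighbor. Again, this is only a sketch with hypothetical names (TraversalHeuristics, RecordCleaning, PickNextTile), not the project's implementation.

```csharp
// Sketch of the stage-2 heuristics; all names are illustrative.
public class TraversalHeuristics
{
    private long totalIntervalBetweenCleanings;
    private int cleaningCount;
    private long lastCleaningTime = -1;

    // Called whenever the agent cleans dirt; maintains a running average of the
    // time between successive cleanings, used as the dirt-respawn estimate.
    public void RecordCleaning(long currentTime)
    {
        if (lastCleaningTime >= 0)
        {
            totalIntervalBetweenCleanings += currentTime - lastCleaningTime;
            cleaningCount++;
        }
        lastCleaningTime = currentTime;
    }

    public double EstimatedRespawnTime =>
        cleaningCount == 0 ? double.PositiveInfinity
                           : (double)totalIntervalBetweenCleanings / cleaningCount;

    // Greedy step: among passable neighbors, pick the one visited longest ago.
    // The map border is assumed to be walls, so the indices stay in range.
    public static (int x, int y) PickNextTile(bool[,] isWall, long[,] lastVisit, (int x, int y) pos)
    {
        var best = pos;
        long oldest = long.MaxValue;
        foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
        {
            int nx = pos.x + dx, ny = pos.y + dy;
            if (isWall[nx, ny]) continue;
            if (lastVisit[nx, ny] < oldest)
            {
                oldest = lastVisit[nx, ny];
                best = (nx, ny);
            }
        }
        return best;
    }
}
```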

Renderers

For displaying the map and the agent, 2 default renderers are available:

  • 2D renderer - renders a classic 2D tile map
  • 3D renderer - renders a textured 3D tile map

System requirements

Windows
XNA 4.0 Refresh
Microsoft Visual Studio 2010, 2012, 2013

Used libraries/assets

XNA 4.0 Refresh
Neoforce Controls (License)

Some textures from CG Textures

Default agent performance plots

Plots of energy consumption and dirt levels for the default agents on four test maps (images): Energy map_1, Dirt map_1, Energy map_2, Dirt map_2, Energy map_3, Dirt map_3, Energy map_4, Dirt map_4.
