
Wumpus World Code



  1. Wumpus World Code. Extra information is stored in the program to facilitate later modification.

  2. main.pro

     % Write a string and a list, followed by a newline.
     format(S,L) :- write(S), write(L), nl.

     % GAME begins here.
     play :-
         initialize_general,
         format("the game is begun.",[]),
         description_total,
         retractall(is_situation(_,_,_,_,_)),
         time(T), agent_location(L), agent_orientation(O),
         assert(is_situation(T,L,O,[],i_know_nothing)),
         write("I'm conquering the World Ah!Ah!..."), nl,
         step.

     Notes: this format/2 is fairly crude I/O (it redefines the usual format/2); the retractall/1 call clears the database before play starts.

  3. Ask the knowledge base what it should do, AND do it.

     step :-   % one move for a healthy agent who is in the cave
         agent_healthy,
         agent_in_cave,
         is_nb_visited,                      % count the number of rooms visited
         agent_location(L),
         retractall(is_visited(L)),
         assert(is_visited(L)),
         description,                        % display a short summary of my state
         make_percept_sentence(Percept),     % I perceive...
         format("I feel ",[Percept]),
         tell_KB(Percept),                   % I learn... (infer)
         ask_KB(Action),
         format("I'm doing : ",[Action]),
         apply(Action),                      % I do...
         short_goal(SG),                     % the goal of my current action
         time(T),                            % time update
         New_T is T+1,
         retractall(time(_)),
         assert(time(New_T)),
         agent_orientation(O),
         assert(is_situation(New_T,L,O,Percept,SG)),
         % We keep in memory, to check: time, agent_location,
         % agent_orientation, perception, short_goal.
         step, !.

     The short_goal(SG) call looks up what you are currently trying to do.

  4. % Final move if dead or out of the cave.
     % NOTE: we get here if the first version of step cannot be satisfied.
     step :-
         format("the game is finished.~n",[]),   % either dead or out of the cave
         agent_score(S),
         time(T),
         New_S is S - T,
         retractall(agent_score(_)),
         assert(agent_score(New_S)),
         description_total,                      % prints the summary
         the_end(MARK),
         display(MARK).

     Note how global variables are updated: retract the old value, then assert the new one.

  5. decl.pro

     initialize_land(map0) :-
         retractall(land_extent(_)),
         retractall(wumpus_location(_)),
         retractall(wumpus_healthy),
         retractall(gold_location(_)),
         retractall(pit_location(_)),
         assert(land_extent(5)),
         assert(wumpus_location([3,2])),
         assert(wumpus_healthy),
         assert(gold_location([2,3])),
         assert(pit_location([3,3])),
         assert(pit_location([4,4])),
         assert(pit_location([3,1])).

  6. initialize agent

     initialize_agent(agent0) :-
         retractall(agent_location(_)),
         …
         assert(agent_location([1,1])),
         assert(agent_orientation(0)),
         assert(agent_healthy),
         assert(agent_arrows(1)),
         assert(agent_goal(find_out)),
         assert(agent_score(0)),
         assert(agent_in_cave).

  7. other initialization

     initialize_general :-
         initialize_land(map0),     % NOTE: pick which map you wish
         initialize_agent(agent0),
         retractall(time(_)), assert(time(0)),
         retractall(nb_visited(_)), assert(nb_visited(0)),
         retractall(score_agent_dead(_)), assert(score_agent_dead(10000)),
         retractall(score_climb_with_gold(_)), assert(score_climb_with_gold(1000)),
         retractall(score_grab(_)), assert(score_grab(0)),
         retractall(score_wumpus_dead(_)), assert(score_wumpus_dead(0)),
         retractall(is_situation(_,_,_,_,_)),
         retractall(short_goal(_)).

  8. perc.pro

     make_percept_sentence([Stench,Breeze,Glitter,Bump,Scream]) :-
         stenchy(Stench),
         breezy(Breeze),
         glittering(Glitter),
         bumped(Bump),
         heardscream(Scream).

     stenchy(yes) :-
         wumpus_location(L1),
         agent_location(L2),
         adjacent(L1,L2), !.
     stenchy(no).

     The system knows the wumpus location. This is a KB check: the wumpus had better be adjacent to where you are, or the first clause fails and stenchy(no) applies.
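adjacent/2 is used here but never shown in the deck. A minimal sketch of how it could be defined, assuming the four-neighbour geometry implied by the location_toward/3 clauses on a later slide (which are copied in so the sketch loads on its own):

```prolog
% Copied from slide 14 so this sketch is self-contained.
location_toward([X,Y],0,[New_X,Y])   :- New_X is X+1.
location_toward([X,Y],90,[X,New_Y])  :- New_Y is Y+1.
location_toward([X,Y],180,[New_X,Y]) :- New_X is X-1.
location_toward([X,Y],270,[X,New_Y]) :- New_Y is Y-1.

% Hypothetical adjacent/2: two rooms are adjacent if one step in
% some direction leads from the first to the second.
adjacent(L1,L2) :- location_toward(L1,_,L2).
```

With this definition, adjacent([3,2],[3,3]) succeeds (one step at orientation 90), so an agent standing at [3,3] next to the wumpus at [3,2] would perceive a stench.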

  9. Similar KB check for glittering.

     glittering(yes) :-
         agent_location(L),
         gold_location(L), !.
     glittering(no).

     Note the lookup of agent_location.

  10. disp.pro (display)

     Displays state info:

     agent_healthy_state(perfect_health) :-
         agent_healthy, agent_courage, !.
     agent_healthy_state(a_little_tired_but_alive) :-
         agent_healthy, !.
     agent_healthy_state(dead).

     The display predicates perform a series of output actions.

  11. the_end('=) Pfftt too easy') :-
         no(agent_in_cave),
         agent_hold,
         no(is_dead), !.

     Called by: the_end(MARK), display(MARK). display/1 does the I/O; it can convert from infix to prefix form.

  12. more.pro – I'm not sure if we want a wall to count as a good room, or if it just happens.

     % A location is estimated thanks to ... good, medium, risky, deadly.
     good(L) :-               % a wall can be a good room !!!
         is_wumpus(no,L),
         is_pit(no,L),
         no(is_visited(L)).
     medium(L) :-             % obviously, if is_visited(L), then
         is_visited(L).       % is_wumpus(no,L) and is_pit(no,L)
     risky(L) :-
         no(deadly(L)).
     deadly(L) :-
         is_wumpus(yes,L),
         is_pit(yes,L),
         no(is_visited(L)).

  13. agent_courage seems to reflect the time spent searching.

     agent_courage :-       % we could compute nb_visited / max_room_to_visit
         time(T),           % time
         nb_visited(N),     % number of visited rooms
         land_extent(LE),   % size of the land
         E is LE * LE,      % maximum number of rooms to visit
         NPLUSE is E * 2,
         less_equal(T,NPLUSE).
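For example, with the map from decl.pro (land_extent(5)) the arithmetic gives E = 25 and NPLUSE = 50, so the agent stays courageous for the first 50 time steps. A standalone sketch with the state stubbed as facts; less_equal/2 is a hypothetical helper, never shown in the deck:

```prolog
:- dynamic time/1, nb_visited/1, land_extent/1.
time(12).          % stub: 12 steps have elapsed
nb_visited(7).     % stub: 7 rooms visited
land_extent(5).    % 5x5 map, as in decl.pro

less_equal(A,B) :- A =< B.   % hypothetical helper, assumed from its use

agent_courage :-             % we could compute nb_visited / max_room_to_visit
    time(T),
    nb_visited(_N),          % looked up but unused, as on the slide
    land_extent(LE),
    E is LE * LE,            % 25: maximum number of rooms
    NPLUSE is E * 2,         % 50: give up after twice that many steps
    less_equal(T,NPLUSE).
```

Here agent_courage succeeds at time 12 and would start failing once time/1 exceeds 50.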

  14. Computes the next location based on orientation.

     location_toward([X,Y],0,[New_X,Y])   :- New_X is X+1.
     location_toward([X,Y],90,[X,New_Y])  :- New_Y is Y+1.
     location_toward([X,Y],180,[New_X,Y]) :- New_X is X-1.
     location_toward([X,Y],270,[X,New_Y]) :- New_Y is Y-1.

  15. Note the use of !, fail to prohibit trying other choices.

     no(P) :- P, !, fail.
     no(_).
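This is the classic negation-as-failure idiom (equivalent to the built-in \+/1): if P succeeds, the cut commits to the first clause and fail makes no(P) fail; if P has no solution, the second clause succeeds. A tiny demonstration with a hypothetical likes/2 fact:

```prolog
no(P) :- P, !, fail.   % P provable: commit, then fail
no(_).                 % P not provable: succeed

likes(mary, wine).     % hypothetical example fact

% ?- no(likes(mary, wine)).   fails (the fact is provable)
% ?- no(likes(mary, beer)).   succeeds (no such fact)
```

One caveat: like \+/1, this is only sound when P is sufficiently instantiated at the time of the call.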

  16. tell.pro – updates the KB

     tell_KB([Stench,Breeze,Glitter,yes,Scream]) :-
         add_wall_KB(yes), !.
     tell_KB([Stench,Breeze,Glitter,Bump,Scream]) :-
         % agent_location(L),   % updating only if unknown could be great,
         % no(is_visited(L)),   % but the wumpus dying changes the percept
         add_wumpus_KB(Stench),
         add_pit_KB(Breeze),
         add_gold_KB(Glitter),
         add_scream_KB(Scream).

     The first clause allows the option of having walls other than at the borders.

  17. There was no stench.

     add_wumpus_KB(no) :-
         agent_location(L1),
         assume_wumpus(no,L1),       % I'm not in a wumpus place
         location_toward(L1,0,L2),   % I'm sure there is no wumpus in
         assume_wumpus(no,L2),       % each adjacent room. >=P
         location_toward(L1,90,L3),
         assume_wumpus(no,L3),
         location_toward(L1,180,L4),
         assume_wumpus(no,L4),
         location_toward(L1,270,L5),
         assume_wumpus(no,L5), !.

  18. There was a stench.

     add_wumpus_KB(yes) :-
         agent_location(L1),         % I don't know if I'm in a wumpus place
         location_toward(L1,0,L2),   % and it's possible there is a wumpus in
         assume_wumpus(yes,L2),      % each adjacent room. <=|
         location_toward(L1,90,L3),
         assume_wumpus(yes,L3),
         location_toward(L1,180,L4),
         assume_wumpus(yes,L4),
         location_toward(L1,270,L5),
         assume_wumpus(yes,L5).

     But you came from one direction, so how does knowing "no wumpus" compare with "possible wumpus"?

  19. % Don't allow a "possible" wumpus to override a "no wumpus".

     assume_wumpus(yes,L) :-         % before, I knew there is no wumpus,
         is_wumpus(no,L), !.         % so there can't be one now ... =)
     assume_wumpus(yes,L) :-
         wall(L),                    % a wumpus can't be in a wall
         retractall(is_wumpus(_,L)),
         assert(is_wumpus(no,L)), !.
     assume_wumpus(yes,L) :-
         wumpus_healthy,             % so...
         retractall(is_wumpus(_,L)),
         assert(is_wumpus(yes,L)), !.
     assume_wumpus(yes,L) :-
         retractall(is_wumpus(_,L)),
         assert(is_wumpus(no,L)).    % because the wumpus is dead >=]

     We think there is a wumpus; we treat it as true until proven otherwise.
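A condensed, runnable restatement of these clauses (with wall/1 and the wumpus state stubbed) shows the priority in action: a recorded "no wumpus" survives a later "possible wumpus" report, while an unknown room gets marked yes as long as the wumpus is alive. This is a sketch for experimenting, not the full program:

```prolog
:- dynamic is_wumpus/2, wall/1, wumpus_healthy/0.
wumpus_healthy.                 % stub: the wumpus is alive

assume_wumpus(yes,L) :-         % a known "no" can't be overridden
    is_wumpus(no,L), !.
assume_wumpus(yes,L) :-         % a wumpus can't be in a wall
    wall(L),
    retractall(is_wumpus(_,L)),
    assert(is_wumpus(no,L)), !.
assume_wumpus(yes,L) :-         % otherwise, while it is alive,
    wumpus_healthy,             % record a possible wumpus
    retractall(is_wumpus(_,L)),
    assert(is_wumpus(yes,L)), !.
assume_wumpus(yes,L) :-         % a dead wumpus is nowhere
    retractall(is_wumpus(_,L)),
    assert(is_wumpus(no,L)).
```

After assert(is_wumpus(no,[2,2])), calling assume_wumpus(yes,[2,2]) leaves is_wumpus(no,[2,2]) in place, whereas assume_wumpus(yes,[3,3]) on an unknown room records is_wumpus(yes,[3,3]).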

  20. exec.pro – carry out the result of an action

     apply(grab) :-
         agent_score(S),                % get my current score
         score_grab(SG),                % get the value of grabbing
         New_S is S + SG,
         retractall(agent_score(S)),
         assert(agent_score(New_S)),    % reset the score
         retractall(gold_location(_)),  % no more gold at this place
         retractall(is_gold(_)),        % the gold is with me!
         assert(agent_hold),            % money, money :P
         retractall(agent_goal(_)),
         assert(agent_goal(go_out)),    % now I want to go home
         format("Give me the money >=}...",[]), !.

  21. If the shot misses the wumpus, update the facts.

     % The wumpus is missed. There are several shoot options;
     % this one occurs after we know we didn't hit.
     apply(shoot) :-
         format("Ouchh, I fail Grrrr >=}...",[]),
         retractall(agent_arrows(_)),    % I can infer some information!
         assert(agent_arrows(0)),
         agent_location([X,Y]),          % I can assume that the wumpus...
         location_ahead([X,NY]),
         is_wumpus(yes,[X,WY]),
         retractall(is_wumpus(yes,[X,WY])),
         assert(is_wumpus(no,[X,WY])),   % ...is not in the supposed room.
         !.

  22. ask.pro – ask the KB for advice

     ask_KB(Action) :- make_action_query(Strategy,Action).

     make_action_query(Strategy,Action) :- act(strategy_reflex,Action), !.
     make_action_query(Strategy,Action) :- act(strategy_find_out,Action), !.
     make_action_query(Strategy,Action) :- act(strategy_go_out,Action), !.

     act(strategy_reflex,die) :-
         agent_healthy,
         wumpus_healthy,
         agent_location(L),
         wumpus_location(L),
         is_short_goal(die_wumpus), !.
     act(strategy_reflex,die) :-
         agent_healthy,
         agent_location(L),
         pit_location(L),
         is_short_goal(die_pit), !.

     Notice the order of actions: first reflex, then find the gold, then get out.

  23. act(strategy_reflex,shoot) :-   % I shoot the wumpus only if I think
         agent_arrows(1),             % that we are in the same X;
         agent_location([X,Y]),       % it means I assume the wumpus and I
         location_ahead([X,NY]),      % are in the same column
         is_wumpus(yes,[X,WY]),
         dist(NY,WY,R1),              % and if I don't give him my back...
         dist(Y,WY,R2),               % <=> if I'm in the good orientation
         less_equal(R1,R2),           % to shoot him... HE!HE!
         is_short_goal(shoot_forward_in_the_same_X), !.
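dist/3 is another helper that never appears on these slides. From its use here, a plausible definition is the absolute distance along one axis; this is an assumption, not the original code:

```prolog
% Hypothetical helper, inferred from its use above:
% dist(A,B,D) binds D to |A - B|.
dist(A,B,D) :- D is abs(A - B).
```

So dist(NY,WY,R1) measures how far the square ahead is from the suspected wumpus row, and less_equal(R1,R2), presumably a thin wrapper around =</2, checks that the agent is facing toward the wumpus rather than away from it.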

  24. find_out means "find out where the gold is". Thus, you proceed if finding the gold is your goal and you have enough time (courage) to do it. Are there any good rooms known?

     act(strategy_find_out,forward) :-
         agent_goal(find_out),
         agent_courage,
         good(_),                  % I'm interested in a good room somewhere,
         location_ahead(L),
         good(L),                  % e.g. the room in front of me
         no(is_wall(L)),
         is_short_goal(find_out_forward_good_good), !.

     act(strategy_find_out,turnleft) :-
         agent_goal(find_out),
         agent_courage,
         good(_),                  % I'm interested, ...
         agent_orientation(O),
         Planned_O is (O+90) mod 360,
         agent_location(L),
         location_toward(L,Planned_O,Planned_L),
         good(Planned_L),          % ... directly, by my left side
         no(is_wall(Planned_L)),
         is_short_goal(find_out_turnleft_good_good), !.

  25. more.pro shows what is_short_goal is.

     % Here you set a short goal. The short_goal is stored so that
     % planning steps know what the next goal is. (It doesn't appear
     % to be used beyond bookkeeping at this point.)
     is_short_goal(X) :-
         retractall(short_goal(_)),
         assert(short_goal(X)).
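Because of the retractall/assert pair, only the most recent short goal is ever stored. A standalone sketch (the dynamic declaration is added here so the snippet loads cleanly on its own):

```prolog
:- dynamic short_goal/1.

is_short_goal(X) :-
    retractall(short_goal(_)),   % forget any previous goal
    assert(short_goal(X)).       % remember only the newest one
```

After is_short_goal(find_out_forward_good_good) followed by is_short_goal(shoot_forward_in_the_same_X), the query short_goal(G) yields only the latter.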
