David Vallet PhD: Personalized Information Retrieval in Context


Personalized Information Retrieval in Context
Exploiting Semantic Knowledge and Implicit User Feedback

by David Jordi Vallet Weadon

Thesis advisor: Pablo Castells Azpilicueta
Department of Computer Science
Escuela Politécnica Superior
Universidad Autónoma de Madrid

Submitted for the Degree of Doctor of Philosophy in the subject of Computer Science at the Universidad Autónoma de Madrid, May 2008.

Abstract

Personalization in information retrieval aims at improving the user's experience by incorporating the user's subjectivity into the retrieval methods and models. The exploitation of implicit user interests and preferences has been identified as an important direction in which to enhance current mainstream retrieval technologies and anticipate future limitations, as worldwide content keeps growing and user expectations keep rising. Without requiring further effort from users, personalization aims to compensate for the limitations of user need representation formalisms (such as the dominant keyword-based or document-based ones) and to help handle the scale of search spaces and answer sets, under which a user query alone is often not enough to produce effective results.

However, the general set of user interests that a retrieval system can learn over a period of time, and bring to bear in a specific retrieval session, can be fairly vast, diverse, and to a large extent unrelated to the particular user search in progress. Rather than introducing all user preferences en bloc, an optimal search adaptation could be achieved if the personalization system were able to select only those preferences which are pertinent to the ongoing user actions. In other words, although personalization alone is a key aspect of modern retrieval systems, it is the application of context awareness to personalization that can really produce a step forward in future retrieval applications.
Context modeling has long been acknowledged as a key aspect in a wide variety of problem domains, among which Information Retrieval is a prominent one. In this work, we focus on the representation of live retrieval user contexts, based on implicit feedback techniques. The particular notion of context considered in this thesis is defined as the set of themes under which retrieval user activities occur within a unit of time.

Our proposal of contextualized personalization is based on the semantic relation between the user profile and the user context. Only those preferences related to the current context should be used, disregarding those that are out of context. The use of semantic-driven representations of the domain of discourse, as a common, enriched representational ground for content meaning, user interests, and contextual conditions, is proposed as a key enabler of effective means for a) a rich user model representation, b) context acquisition at runtime and, most importantly, c) the discovery of semantic connections between the context and the concepts of user interest, in order to filter out those preferences that are likely to be intrusive within the current course of user activities.

Contents

Abstract
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Motivation
  1.2 Personalization in Context
    1.2.1 Semantics and Personalization
    1.2.2 User Context Modeling and Exploitation
  1.3 Contributions
  1.4 Outline
2 State of the Art
  2.1 Personalized Information Retrieval
    2.1.1 User Profile Representation
    2.1.2 User Profile Learning
    2.1.3 User Profile Exploitation
    2.1.4 Personalization in Working Applications
  2.2 Context Modeling for Information Retrieval
    2.2.1 Concept of Context
    2.2.2 Context Acquisition and Representation
    2.2.3 Context Exploitation
3 A Personalized Information Retrieval Model Based on Semantic Knowledge
  3.1 Ontology-based User Profile Representation
  3.2 A Semantic Approach for User Profile Exploitation
    3.2.1 Personalized Information Retrieval
4 Personalization in Context
  4.1 Notation
  4.2 Preliminaries
  4.3 Semantic Context for Personalized Content Retrieval
  4.4 Capturing the Context
  4.5 Semantic Extension of Context and Preferences
    4.5.1 Spreading Activation Algorithm
    4.5.2 Comparison to Classic CSA
  4.6 Semantic Preference Expansion
    4.6.1 Stand-alone Preference Expansion
  4.7 Contextual Activation of Preferences
  4.8 Contextualization of Preferences
  4.9 Long and Short Term Interests in the State of the Art
  4.10 An Example Use Case
5 Experimental Work
  5.1 Evaluation of Interactive Information Retrieval Systems: An Overview
    5.1.1 User Centered Evaluation
    5.1.2 Data Driven Evaluation
    5.1.3 Evaluation Metrics
    5.1.4 Evaluation Corpus
  5.2 Our Experimental Setup
  5.3 Scenario Based Testing: A Data Driven Evaluation
    5.3.1 Scenario Based Evaluation System
    5.3.2 Scenario Based Evaluation Methodology
    5.3.3 Scenario Based Experimental Results
  5.4 User Centered Evaluation
    5.4.1 User Centered Evaluation System
    5.4.2 User Centered Evaluation Methodology
    5.4.3 User Centered Experimental Results
6 Conclusions and Future Work
  6.1 Summary and Achieved Contributions
    6.1.1 Personalization Framework Based on Semantic Knowledge
    6.1.2 Personalization in Context
    6.1.3 User and Context Awareness Evaluation
  6.2 Discussion and Future Work
    6.2.1 Context
    6.2.2 Semantics
References
Appendices
  A. Detailed Results for the Scenario Based Experiments
  B. User Centered Evaluation Task Descriptions

List of Figures

Figure 2.1. Query operation example for a two-dimension projection.
Figure 2.2. Example of user profile based on logic term operators.
Figure 2.3. Typical schema of document weighting in personalized retrieval systems.
Figure 3.1. User preferences as concepts in an ontology.
Figure 3.2. Links between user preferences and search space.
Figure 3.3. Visual representation of metadata and preference vector similarity.
Figure 3.4. Construction of two concept-weighted vectors.
Figure 4.1. Simple version of the spreading activation algorithm.
Figure 4.2. Example of preference expansion with the CSA algorithm.
Figure 4.3. Priority queue variation of the spreading activation algorithm.
Figure 4.4. Parameter-optimized variation, with priority queue, of the spreading activation algorithm.
Figure 4.5. Semantic intersection between preferences and context.
Figure 4.6. Characterization of concept drift in the contextualization algorithm.
Figure 4.7. A subset of domain ontology concepts involved in the use case.
Figure 4.8. Visual representation of the preference contextualization.
Figure 5.1. Example of a simulated situation.
Figure 5.2. Different areas of performance for a precision and recall curve.
Figure 5.3. Main window of the scenario based evaluation system.
Figure 5.4. UI for complex relation creation.
Figure 5.5. Interactive dialog for semantic query generation.
Figure 5.6. Concept profile editor UI.
Figure 5.7. Examples of document snippets and text highlighting.
Figure 5.8. Contextual and personalization information dialog.
Figure 5.9. Example of user centered task description.
Figure 5.10. Comparative performance of personalized search with and without contextualization.
Figure 5.11. Comparative mean average precision histogram of personalized search with and without contextualization.
Figure 5.12. User centered evaluation system UI.
Figure 5.13. User preference edition.
Figure 5.14. Task description for task 1: News about agreements between companies.
Figure 5.15. Relevance assessment UI.
Figure 5.16. Comparative performance of personalized search with and without contextualization.

List of Tables

Table 2.1. Overview of personalized information retrieval systems.
Table 2.2. Overview of term-based user profile representation systems.
Table 2.3. Overview of concept-based user profile representation in personalized retrieval systems.
Table 2.4. Overview of usage-history-based user profile representation in personalized retrieval systems.
Table 2.5. Overview of explicit feedback learning in personalized systems.
Table 2.6. Overview of implicit feedback learning in personalized systems.
Table 2.7. Overview of hybrid feedback learning in personalized systems.
Table 2.8. Classification of document weighting exploitation in personalized systems.
Table 2.9. Overview of context-aware retrieval systems.
Table 4.1. Spreading activation algorithm optimization parameters.
Table 4.2. Example of user profile: Sue's preferences.
Table 4.3. Example of propagation of weights through semantic relations.
Table 4.4. Example of context vector.
Table 4.5. Example of expanded context vector.
Table 4.6. Example of extended user preferences.
Table 4.7. Example of contextualized user preferences.
Table 5.1. Summary of complete evaluation systems.
Table 5.2. Results on Mean Average Precision (MAP) for each of the three evaluated retrieval models.

Chapter 1

Introduction

1.1 Motivation

The size and the pace of growth of the worldwide body of available information in digital format (text and audiovisual) constitute a permanent challenge for content retrieval technologies. People have instant access to unprecedented inventories of multimedia content worldwide, readily available from their office, their living room, or the palm of their hand.
In such environments, users would be helpless without the assistance of powerful search and browsing tools to find their way through. In environments lacking a strong global organization (such as the open WWW), with decentralized content provision, dynamic networks, etc., query-based and browsing technologies often find their limits.

Take as an example a user who enters the query "search library" into a typical Web search engine, such as Google, Yahoo! or MSN Search. Taking the query alone, we may think the user is looking for an online service for locating books in, e.g., some local library, bookstore, or digital library. But the intention of this query could also be related, for instance, to finding computer programming libraries supporting content search and retrieval functionalities. Such an ambiguous query, which by itself does not provide enough information to properly grasp the user's information need, is an example where personalization capabilities show their usefulness. While mainstream Web search engines return the same results to all users [1], personalized systems adapt the search results to the users' interests. In the example, the second interpretation (programming library) might seem more likely, and the first (book search) a bit far-fetched. Interestingly though, testing the example in Google, the results happen to be more related to the first meaning of the query: Web sites like WorldCat (a book and local library locator) or the Google Book Search service appear at the top of the ranking.

[1] Nowadays there are some incipient exceptions. For instance, Google is currently applying a subtle personalization approach which, as its filed US patent applications suggest, uses the past usage history of the user in order to promote results previously opened in similar past queries. The user's country and language are also used to perform certain simple adaptations.

Let us now suppose there are two users with different interests using the Web search engine: one has an interest in computer programming and the other has an interest in science fiction literature.
With this information at hand, it should be possible for a personalized search engine to disambiguate the original query "search library" [2]. The first user should receive, e.g., the Lucene [3] and Terrier [4] Java libraries (which support indexing and searching functionalities) in the top results. The second user should receive results about, e.g., catalog search services for local and online libraries specialized in science fiction literature.

Now what if a user happens to share these two interests, e.g. a computer programmer who likes science fiction literature? If the personalization system applied all the preferences together, it may happen that the results fully satisfy neither interest. Results based on both preferences may include, for instance [5], two average-quality science fiction online catalogs written in Java, and an Amazon.com page about "Java programming" under the "Science Fiction & Fantasy" category [sic]. These results are relevant to all the interests of the user in a too literal way, but the user will hardly find them subjectively interesting in a particular realistic situation. The problem here is that the user preferences, taken as a whole, are also ambiguous for the query at hand. The question then is whether, and where, it is possible to find further information to clarify the actual user intent. The solution explored in this thesis is to seek such cues in the closer context of the current user situation (e.g. the task at hand).

As hypothesized in this thesis, context applied to personalized retrieval can be exploited to discard interests that are not related to the current context of the user. For instance, if the user is at work, the preference for computer programming is more likely to be relevant, whereas the preference for science fiction literature could be more safely discarded and not used in the personalization process (i.e. this can be expected to be a good decision in most cases, that is, on average).
Another example of a context source, which is explored in this work, is the implicit feedback information from the user, i.e. the contextual information implicitly provided by previous interactions of the user with the retrieval system within the same search session. As an example, suppose that before the user issued the query "search library", she opened a document related to science fiction, or input a query about the "Ender's Game" science fiction book. With this background information the system may infer that the relevant preferences in this particular situation are the ones related to science fiction literature, whereas the preference for computer programming has no clear relation to the current user focus, and can thus be discarded from the personalized processing step in this particular case. In both situations, the system is able at the same time to tackle the ambiguity of the search and to select which background user interests matter in the current situation, achieving results that are relevant to both the user and her/his situation.

[2] Simplifying the personalization process, we can suppose that the personalization system disambiguates the query by automatically adding some extra terms. For instance, it would change the query to "java search library" or to "search science fiction library" for each user respectively. This example was elaborated using the Google search engine. Note that results may vary with time.
[3] http://lucene.apache.org/
[4] http://ir.dcs.gla.ac.uk/terrier/
[5] The personalized example can be simulated (in a simplified way) by the query "java search science fiction library". The discussed example results are real, testing this with the Google Web search engine.
Personalized content access aims to alleviate the information overload and information need ambiguity problems through an improved Information Retrieval (IR) process, using implicit user preferences to complement explicit user requests, in order to better meet individual user needs (Gauch et al. 2003; Haveliwala 2002; Jose and Urban 2006; Kobsa 2001). As exposed in the previous example, the main motivation of personalized retrieval systems is that users often fail to represent their information need, using no more than three keywords (Jansen et al. 1998), which often leads to ambiguous queries (Krovetz and Croft 1992). Needless to say, user queries rarely include the implicit interests of the user.

Personalization is currently envisioned as a major research trend, since classic IR tends to select the same content for different users on the same query, much of which is barely related to the user's wish (Chen and Kuo 2000). From the early days up to the latest progress in this area, personalization has been applied to different retrieval aspects, such as content filtering (Micarelli and Sciarrone 2004) and recommendation (Sheth and Maes 1993), content search (Jeh and Widom 2003), navigation (Lieberman 1995), or content presentation (Sakagami and Kamba 1997). Personalization is also relevant to many other research areas, such as education (Brusilovsky et al. 1998), digital libraries (Smeaton and Callan 2001), TV media (Aroyo et al. 2007), or tourism (Fink and Kobsa 2002), to name a few. Nowadays, major online services such as Google (Badros and Lawrence 2005; Zamir et al. 2005), Amazon.com (Smith et al. 2005) or Yahoo! (Kraft et al. 2005) are researching personalization, in particular to improve their content retrieval systems.

One of the lessons learnt over the years, in particular from the practical initiatives, is that it is very difficult to achieve effective generic personalization solutions without considerable knowledge about the particular problem being addressed. Attempts have tended to result in either a very specialized solution, or a rather generic one providing very limited personalization capabilities.
In order to address some of the limitations of classic personalization systems, researchers have looked to the emerging area defined by so-called context-aware applications and systems (Abowd et al. 1997).

Context-awareness has long been researched and successfully applied in a wide variety of fields, including mobile and pervasive computing (Chalmers 2004), image analysis (Dasiopoulou et al. 2005), computational linguistics (Finkelstein et al. 2001), and information retrieval (Bharat 2000; Kim and Chan 2003; White and Kelly 2006). Context in IR has also been subject to a wide scope of interpretation and application, ranging from desktop information (Dumais et al. 2003) to physical user location (Melucci 2005), recently visited Web pages (Sugiyama et al. 2004), or session interaction data (Shen et al. 2005b).

The research undertaken here lies at the confluence of context-awareness and personalization, and aims at a solution that combines the advantages of the two areas. A personalization approach that is context-aware, i.e. a personalization-in-context approach, should be able to apply personalization in the different areas and retrieval aspects mentioned previously. And, at the same time, it should be aware of the context the user is in when performing a retrieval task. It should be able to "adapt the adaptation process" in order to provide a more effective and precise personalization.
In this setting, this thesis focuses on three main areas: a) the exploitation of domain knowledge, represented in a rich and accurate form, to enhance the capabilities and performance of personalization by improving the representation of user preferences; b) acknowledging and coping with the dynamic aspects of implicit user preferences, which, though stable, do not come into play in a monolithic way in practice, but relative to the user's goals, state, ongoing actions, etc.; this involves defining a modular context modeling framework, on top of a personalization system, which captures the relative essence of user interests in a workable yet effective way, improving the performance and reliability of the base personalization system and, in particular, reducing the well-known potential intrusiveness of personalization techniques; and c) testing, evaluating and measuring the improvement achieved by the personalization techniques and their context-based enhancement.

1.2 Personalization in Context

1.2.1 Semantics and Personalization

Three main areas or problems commonly need to be addressed in a personalization approach: the representation, acquisition, and exploitation of user profiles.

The user profile can be automatically acquired (or enriched) by monitoring the user's interaction with the system, as long as the monitoring period is sufficient and representative of the user's preferences. User profile learning alone is a wide and complex area of research (Gauch et al. 2007), out of the scope of, and complementary to, the problems addressed in this thesis, which focuses on the areas of user profile representation and exploitation.

Representing and exploiting user preferences in a formal way is not an easy task. User preferences are often vague (e.g. "I like sports", "I like travelling", "I like animals"), complex (e.g.
"I like swimming, but only when it's really hot", "On rainy days, there's nothing like going to the cinema", "I like traveling to Africa, but only to countries with stable governments"), or even contradictory (e.g. "I don't like sensationalist tabloids, but when I'm waiting for my doctor's appointment, I like to take a peek at them for a while…", "I like animals, but I cannot stand anything that resembles a rat").

Typical solutions for user profile representation are based on statistical methods, where a user profile is represented as a bag of terms (Liu et al. 2004; Sakagami and Kamba 1997; Teevan et al. 2005). These approaches can be complemented with relations, such as correlation measures (Asnicar and Tasso 1997) or links to topic categories (Liu et al. 2004). However, terms cannot represent all the subtleties of the previous examples: 1) they are ambiguous; for instance, "Jaguar" can refer to an animal, a car brand, or an operating system; 2) their semantics is rather limited; for instance, an interest in "birds" in general is difficult to match to a document about the "Woodpecker" without explicitly stating that a woodpecker is a bird; and 3) they do not allow the representation of complex preferences based on relations. For instance, a preference represented as the bag of terms "stable government African country" could be less likely to match interesting documents than an explicit list of countries that fulfill this restriction.

In this thesis, we address these limitations by elaborating on the semantic representation of both user interests and multimedia content. Our goal is to exploit these representations in a personalization approach for content access and retrieval of documents, in which documents are associated with a semantic index, where content is expressed by means of a set of knowledge concepts. Among the possible semantic representation formalisms, ontologies bring a number of advantages (Staab and Studer 2004), as they provide a formal framework supporting explicit, machine-processable semantics definitions, and support the inference and derivation of new knowledge from existing knowledge.
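The "birds"/"Woodpecker" mismatch above can be made concrete with a small sketch. The following toy example (not the thesis implementation; the concept names and the tiny is-a hierarchy are illustrative assumptions) shows how plain keyword overlap fails where concept subsumption succeeds:

```python
# Hypothetical sketch: a toy is-a hierarchy showing why a
# concept-based profile can match content that plain keyword
# overlap misses.

IS_A = {
    "Woodpecker": "Bird",
    "Bird": "WildAnimal",
    "Jaguar": "WildAnimal",  # the animal sense, not the car brand
}

def ancestors(concept):
    """All concepts subsuming `concept`, including itself."""
    chain = [concept]
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def keyword_match(profile_terms, doc_terms):
    """Plain bag-of-terms overlap."""
    return bool(set(profile_terms) & set(doc_terms))

def concept_match(profile_concepts, doc_concepts):
    # A document concept satisfies a preference if the preferred
    # concept subsumes it (e.g. "Bird" subsumes "Woodpecker").
    return any(pref in ancestors(c)
               for c in doc_concepts for pref in profile_concepts)

profile = ["Bird"]         # user is interested in birds in general
document = ["Woodpecker"]  # document never mentions the term "bird"

print(keyword_match(profile, document))  # False: no term overlap
print(concept_match(profile, document))  # True: Woodpecker is-a Bird
```

A real system would of course draw the subsumption chain from a full ontology rather than a hand-written dictionary, but the gain is the same: the semantic relation recovers a match that term statistics cannot see.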
Our approach adopts, but is not restricted to, an ontology-based grounding for the representation of user profiles and content descriptions. The goal of our personalization approach is to prove the advantages of exploiting concepts, and the relations among them, for personalized and context-aware systems. The advantages that we draw from this representation can be summarized as:

- Rich user profile representations: Concept-based preferences are more precise and convey more semantics than simple keyword terms. Concepts are unambiguously attached to a piece of content or to a user profile. For instance, the concept "WildAnimal:Jaguar" is uniquely identified as "Jaguar, the animal species". Furthermore, concepts can enrich their semantics by means of semantic properties. For instance, the concept "Woodpecker" could be related to the "Bird" concept through the relation "is a subspecies of".

- A formal ontological representation allows the expression of complex preferences: The formal representation of ontologies allows the selection of a set of concepts by means of complex queries or relations. Previously mentioned complex preferences such as "I like traveling to Africa, but only to countries with stable governments" can be represented in an ontological, formal way.

1.2.2 User Context Modeling and Exploitation

Similarly to personalization, approaches aiming to achieve context-aware enhancements need to address the issues of context representation, acquisition and exploitation.

As in user profile representation, context-aware systems face difficulties regarding the representation of the user's current contextual situation.
This representation depends largely on the notion of context the system is considering. Context can be interpreted as the physical location of the user, the open applications on the user's desktop, or the content the user has previously interacted with, to name a few. From now on, we will use the term "context" to denote our interpretation of context: the set of themes under which retrieval user activities occur within a retrieval session. Following our interpretation, context descriptions such as "I'm researching tropical birds", "tomorrow I'm travelling to Zurich" or "today I want to go to the cinema" are difficult to represent in a formal way. Similarly to user profiles, context has commonly been obtained and modeled using term-based statistical approaches (Dumais et al. 2003; Rocchio and Salton 1971; Shen et al. 2005b). This has limitations similar to the ones pointed out for user preference representation. Thus, in our approach we opt for a concept-based semantic representation of the user context, in such a way that we obtain the same representational richness and enhanced semantics that statistical approaches lack.

Context acquisition is also tightly related to the particular interpretation of context, which makes it a difficult notion to capture and grasp in a software system. In general, sources of contextual information are implicit, i.e. they are not directly represented as a characterization of the relevant aspects of the user and her situation. It is this implicit nature of context that makes its acquisition difficult. As with user profile learning approaches, the user can explicitly provide this information to the system, but it is useful to automate this input as far as possible, to relieve the user from extra work. Context acquisition techniques based on a manual, explicit cooperation of the user are mostly based on Relevance Feedback (RF) approaches, in which the user states which pieces of content are relevant in the current situation.
However, to a higher degree than with explicit techniques for user profiling, users are often reluctant to provide such information (Shen et al. 2005b). The main cause is that users have to provide this information in every interactive session, as the recorded short-term feedback is discarded once the session ends.

For this reason, implicit feedback has been widely researched as an alternative in context-aware retrieval systems (Kelly and Teevan 2003; White 2004b). Implicit feedback techniques often rely on monitoring the user's interaction with the retrieval system, and extract the apparently most representative information related to what the user is aiming at. Again, typical implicit feedback approaches are based on statistical techniques which, similarly to RF approaches, gather the most important documents that represent the user's current context, from which a term-based representation is built (Leroy et al. 2003; Shen et al. 2005b; Sugiyama et al. 2004). An example of an implicit feedback model is the ostensive model (Campbell and van Rijsbergen 1996). This model handles the drifting nature of context by using a time variable and giving more importance to recently occurring items than to older ones. However, this model has only been applied to a term-based context representation (White et al. 2005b).

We propose the notion of semantic runtime context, representing the set of concepts or themes involved in user actions during an ongoing retrieval session. We propose a method to build a dynamic representation of the semantic context of retrieval user tasks in progress, by using implicit feedback techniques and adapting the ostensive model approach to our semantic representation of context. The goals of our research on context modeling can be summarized as:

- Enhanced representation of the user context: Similarly to the semantic representation of the user profile, we aim to build a semantically rich representation of the user context in order to enable better, more meaningful and accurate representations of the user's contextual situations.
- Implicit feedback acquisition of live semantic context: We do not want to burden users with explicitly having to provide their context. By adapting existing implicit feedback approaches, our goal is to introduce a semantic acquisition approach for user context, also taking into consideration the drifting nature of context.

The third issue in context-awareness, namely context exploitation, is also a complex research problem on its own. Once the system has a representation of the user context, how to best exploit it for the benefit of the user is not a trivial question. A widely adopted approach is to take this context representation as a short-term interest profile, and exploit it similarly to long-term user profiles in a personalization approach. The main advantages of this approach are that the short-term user profile is usually narrower, more precise and focused on the task, as it has been acquired with the current session information, and that wrong system guesses have a much lesser impact on performance, as the potentially incorrect predictions are discarded after the retrieval session. However, this approach does not make a clear, explicit difference between short-term and long-term interests. As a consequence, either the wider perspective of overall user trends, or the ability of the system to focus on temporary user priorities, is often lost. Room for improvement thus remains towards combining the advantages of personalization and context-aware approaches.

Our proposed approach is to use the user context in order to reduce potential inaccuracies of personalization systems, which typically apply their personalization algorithms out of context. In other words, although users may have stable and recurrent overall preferences, not all of their interests are relevant all the time. Instead, usually only a subset is active in the user's mind during an ongoing task, and the rest can be considered as "noise" preferences.
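The idea of activating only the in-context subset of preferences can be sketched as a graph-distance filter over a knowledge base. The toy KB, preference weights, and hop threshold below are illustrative assumptions only, not the actual model of this thesis:

```python
from collections import deque

# Toy KB: undirected semantic relations between concepts (illustrative)
KB = {
    "Sports": ["Soccer", "Tennis"],
    "Soccer": ["Sports", "Real Madrid"],
    "Tennis": ["Sports"],
    "Real Madrid": ["Soccer"],
    "Jaguar (animal)": ["Wild Animal"],
    "Wild Animal": ["Jaguar (animal)"],
}

def distance(kb, a, b):
    """Length of the shortest semantic path between two concepts (BFS)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in kb.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def contextualize(preferences, context, kb, max_dist=2):
    """Keep only preferences within max_dist hops of some context concept."""
    return {p: w for p, w in preferences.items()
            if any(distance(kb, p, c) <= max_dist for c in context)}

active = contextualize({"Real Madrid": 0.9, "Jaguar (animal)": 0.8},
                       context={"Sports"}, kb=KB)
# Only "Real Madrid" survives: Sports -> Soccer -> Real Madrid (2 hops)
```

The sketch keeps the in-context preference and drops the "noise" one, which is the effect pursued here; the actual mechanism developed in this thesis weighs the number and length of semantic paths rather than a single shortest path.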
Our proposal is to provide a method for the combination of long-term (i.e. user profile) and short-term user interests (i.e. user context) that takes place in a personalized interaction, bringing to bear the differential aspects of individual users while avoiding distracting them away from their current specific goals. Many personalized systems do not distinguish between long-term and short-term preferences, either applying the former or the latter, or treating both as the same. What we propose in this work is to make a clear distinction between these, and to model how both long-term interests (i.e. user preferences) and short-term interests (i.e. user context) can complement each other in order to maximize the performance of search results by the incorporation of context-awareness into personalization.

Our approach is based on the exploitation of the semantic representation of context in order to discard those preferences that are out of context in the current situation. This sort of contextual activation of preferences is based on the computation of the semantic distance between each user preference and the set of concepts in the current context. This distance is assessed in terms of the number and length of the semantic paths linking preferences to context, across the semantic network defined by a semantic Knowledge Base (KB). Finally, only those preferences that surpass a given similarity threshold are taken into account in the personalization phase. This approach aims at the following objective:

- Complementation of personalization with context awareness: Our definition of user context and preferences allows the combination of both techniques in a single retrieval system.
Our proposal of preference contextualization aims at improving the accuracy of personalization techniques, by analyzing the semantic relation between user interests and the current user context, and discarding those preferences that could potentially disrupt the user's retrieval experience. The semantic representation of both user preferences and context can enable finding non-explicit relations between context and user interests. For instance, if the context is related to Sports, the semantic relations can be exploited to activate preferences such as "Soccer", and also preferences such as "Real Madrid", given that the KB has a relation between "Real Madrid" and "Soccer" and between "Soccer" and "Sports".

1.3 Contributions

The main original contributions of the research presented in this thesis include the following:

• A semantic-based personalization framework for information retrieval.
A personalization model based on an enhanced semantic representation of user preferences and content is developed. Explicit domain concepts and relations are exploited to achieve performance improvements in personalized IR.

• A semantic IR context modeling approach.
Context is a broad notion in many ways. One of the aims of the research undertaken in this thesis is to identify and synthesize a particular subset out of the full potential scope and variability of the term, concise enough to be approximated (represented, obtained, and applied), but powerful enough to enable specific improvements in IR performance. Similarly to the personalization framework, we propose a semantic-oriented model for context representation, based on explicit domain concepts defined upon an ontological grounding. On top of this, a context acquisition model is defined, based on implicit feedback techniques and ostensive models, where the user context is defined as the set of background themes or topics involved in a user session.

• A user preference contextualization approach.
An approach to the contextualization of user preferences is proposed, based on a combination of long-term and short-term user interests.
The proposed strategy consists of a semantic expansion technique, defined as a form of Constrained Spreading Activation (CSA), exploiting semantic relations in order to find the preferences that are (semantically) related to the live user context, and thus relevant for the retrieval task at hand.

• Research on experimental evaluation methods for personalized and contextual IR.
In order to evaluate the proposed contextual personalization approach, a two-step evaluation methodology is followed. The aim of the proposed experimental methodology is to achieve a fair balance between a fine-grained and reproducible scenario-based evaluation, and an objective and more general user-centered evaluation.

This thesis includes a strong evaluation component for the proposed approach. Evaluation of both personalized and interactive IR systems (Yang and Padmanabhan 2005) is known to be a difficult and expensive task. On top of that, a formal evaluation of a contextualization technique may require a significant amount of extra feedback from users in order to measure how much better a retrieval system can perform with the proposed techniques than without them. To tackle this evaluation complexity, we introduce a two-step evaluation methodology: 1) a subjective but fine-grained evaluation, based on simulated scenarios, and 2) an objective and user-oriented performance evaluation, in order to test the validity of a both personalized and interactive approach.

1.4 Outline

This thesis is structured in five main chapters.

In Chapter 2 we overview the context of our work. We survey related work on the state of the art of personalized and context-aware retrieval systems. This survey includes a comprehensive categorization of previous related work, in which we highlight the main characteristics of the conceptualization of user interests and/or context of the surveyed proposals.
In Chapter 3 we describe our personalization framework, based on a conceptual representation of user interests. The main characteristic of this personalization framework is a concept-based representation of user interests, in which user profiles are represented as a set of weighted concept vectors. Adopting a probabilistic approach, the concept weights correspond to the intensity of user interest (or user dislike, in the case of negative values) for each concept of the ontology. A Personal Relevance Measure (PRM) score computation technique for content items is introduced. This approach is based on the concept-vector similarity between the user profile and the concept vector representing the content item, obtained from the semantic index.

In Chapter 4 we introduce the core part of this thesis: the application of context to our personalization framework. Firstly, the model for the semantic-based representation of the user context is presented. This representation model, like the user preference model, is based on a weighted concept vector, where each weight value represents the probability that the concept in the ontology is related to the current context. Secondly, we introduce our approach for live semantic user context acquisition. This approach is based on an adaptation of the ostensive model (Campbell and van Rijsbergen 1996) to a semantic index. The acquisition technique monitors user interactions with the retrieval system during the current session (e.g. user queries and opened content), extracting for each interaction step the concepts related to each action. Finally, an approach for the contextualization of preferences is proposed. This approach consists of a sort of fuzzy intersection between user preferences and context, exploiting the semantic relations of the KB with a probabilistic model.
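An ostensive-style acquisition of a concept-based context, as outlined above, can be sketched as a time-decayed aggregation of the concepts observed at each interaction step. The decay factor and the concept weights below are illustrative assumptions, not the actual parameters of the model presented in Chapter 4:

```python
def update_context(context, step_concepts, decay=0.5):
    """One step of an ostensive-style context update: evidence from past
    steps is discounted by `decay`, then the concepts observed in the
    current interaction (query, opened content, ...) are added."""
    new_context = {c: w * decay for c, w in context.items()}
    for concept, weight in step_concepts.items():
        new_context[concept] = new_context.get(concept, 0.0) + weight
    return new_context

# Three interaction steps of a hypothetical session
ctx = {}
for step in [{"Bird": 1.0},
             {"Bird": 1.0, "Woodpecker": 1.0},
             {"Tropics": 1.0}]:
    ctx = update_context(ctx, step)
# Recently observed concepts ("Tropics") now outweigh older ones ("Bird")
```

The exponential discount is what captures the drifting nature of context: a concept that stops appearing in the interaction fades out of the context vector within a few steps.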
In Chapter 5 we evaluate the performance of our proposals. We survey the most important evaluation methodologies regarding adaptive and interactive retrieval systems in order to provide the rationale for our own evaluation methodology. Our evaluation methodology is based on the extension of simulated task situations (Borlund 2003), by including a set of user preferences and a hypothetical contextual simulation. We present a two-step evaluation approach: first, a scenario-based methodology, in which user preferences and the interaction model are simulated, and second, a user-centered approach, in which user preferences are provided manually by users, who interact freely with our experimental retrieval system.

In Chapter 6 we provide the conclusions of this thesis, together with further discussion and future work to be addressed in order to complement our proposal.

Chapter 2
State of the Art

The aim of this section is to gather and evaluate existing techniques, approaches, ideas, and standards from the fields of user modeling, personalization, and context-aware systems. However, we will only focus on content-based systems, excluding, for instance, item-based collaborative recommendation systems (Schafer et al. 2007). We have also added a selection of content-based recommendation systems, which share similar characteristics with the system that will be introduced in the following sections, such as computing a personalization score based on the similarity between the user interests and a document.

2.1 Personalized Information Retrieval

Due to the massive amount of information that is nowadays available, the process of information retrieval tends to select numerous and heterogeneous documents as the result of a single query; this is known as information overload. The reason is that the system cannot acquire adequate information concerning the user's wishes. Traditionally, Information Retrieval Systems (IRSs) allow users to provide a small set of keywords describing their wishes, and attempt to select the documents that best match these keywords.
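In the classic vector space model, this keyword matching is typically a tf-idf weighted cosine between the query and each document. A minimal sketch over a toy corpus, using the standard textbook formulation rather than the method of any particular system surveyed here:

```python
import math
from collections import Counter

docs = ["jaguar speed animal", "jaguar car dealer", "tropical bird species"]

def tfidf(text, corpus):
    """Standard tf-idf weights for one text against a small corpus."""
    tf = Counter(text.split())
    n = len(corpus)
    return {t: f * math.log(n / sum(t in d.split() for d in corpus))
            for t, f in tf.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values())) or 1.0
    return dot / (norm(u) * norm(v))

query_vec = tfidf("jaguar animal", docs)
ranked = sorted(docs, key=lambda d: cosine(query_vec, tfidf(d, docs)),
                reverse=True)
# The animal page ranks first, the unrelated bird page last
```

With only the two ambiguous keywords "jaguar animal", the ranking works here; the limitation discussed next is that real queries are even shorter and carry no such disambiguating term.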
The majority of these queries are short (85% of users search with no more than 3 keywords (Jansen et al. 1998)) and ambiguous (Krovetz and Croft 1992), and often fail to represent the information need, let alone the implicit interests of the user. Although the information contained in these keywords rarely suffices for the exact determination of user wishes, this is a simple way of interfacing that users are accustomed to; therefore, there is a need to investigate ways to enhance information retrieval without altering the way users specify their requests. Consequently, information about the user's wishes needs to be found in other sources.

The earliest work in the field of user modeling and adaptive systems can be traced back to the late 70's (see e.g. (Perrault et al. 1978; Rich 1998)). Personalization technologies gained significance in the 90's, with the boost of large-scale computing networks which enabled the deployment of services to massive, heterogeneous, and less predictable end-consumer audiences (Hirsh et al. 2000). One of the main boosts to personalization approaches came in the mid-late 90's with the appearance of personalized news access systems (Bharat et al. 1998; Lang 1995; Sakagami and Kamba 1997) and personalized information agents (Chen and Sycara 1998; Lieberman 1995; Widyantoro et al. 1997). Significant work has been produced since the early times in terms of both academic achievements and commercial products (see (Brusilovsky et al. 1998; Fink et al. 1997; Kobsa 2001; Montaner et al. 2003) for recent reviews).

The goal of personalization is to endow software systems with the capability to change (adapt) any aspect of their functionality and/or appearance at runtime to the particularities of users, to better suit their needs. To do so, the system must have an internal representation (model) of the user. It is common in the user modeling discipline to distinguish between user model representation, user model learning/update, and adaptation effects or user model exploitation.
Personalization of retrieval is the approach that uses user profiles, in addition to the query, in order to estimate the user's wishes and select the set of relevant documents (Chen and Kuo 2000). In this process, the query describes the user's current search, which is the local interest (Barry 1994), while the user profile describes the user's preferences over a long period of time; we refer to the latter as global interest. The method for preference representation and extraction, as well as the estimation of the degree to which local or global interests should dominate the selection of the set of relevant documents, are still open research issues (Wallace and Stamou 2002).

Aspects of software that have been subject to personalization include, among others, content filtering (Micarelli and Sciarrone 2004), sequencing (Brusilovsky et al. 1998), content presentation (De Bra et al. 1998), recommendation (Sheth and Maes 1993), search (Jeh and Widom 2003; Liu et al. 2004), user interfaces (Eisenstein et al. 2000; Hanumansetty 2004; Mitrovic and Mena 2002), task sequencing (Vassileva 1997), or online help (Encarnação 1997). Typical application domains for user modeling and adaptive systems include education (Brusilovsky et al. 1998; De Bra et al. 1998; Terveen and Hill 2001; Vassileva 1997), e-commerce (Ardissono and Goy 2000; Fink and Kobsa 2000), news (Bharat et al. 1998; Sheth and Maes 1993; Widyantoro et al. 1999), digital libraries (Callan et al. 2003; Smeaton and Callan 2001), cultural heritage (Ardissono et al. 2003), tourism (Fink and Kobsa 2002), etc.

The field of user modeling and personalization is considerably broad. The aim of this section is not to provide a full overview of the field, but to report the state of the art in the areas related to this work, i.e. personalized content-based retrieval, recommendation and filtering.

The next sub-sections will summarize approaches to retrieval personalization, classified by where the personalization algorithm is applied in the search engine algorithm. Table 2.1 classifies the most important studied proposals.
In the next sections we will provide an overview of each classification (i.e. representation, learning and exploitation). The Representation column shows the representation approach of the user profile. The Learning column classifies the user profile learning technique used. The last column, Exploitation, shows which technique is used in the personalization phase. Other classifications of personalization systems can be found in (Adomavicius and Tuzhilin 2005; Micarelli et al. 2007; Montaner et al. 2003).

REFERENCE | REPRESENTATION | LEARNING | EXPLOITATION
(Ahn et al. 2007) | Terms | Hybrid | Document weighting
(Aroyo et al. 2007) | Concepts | None | Document weighting
(Asnicar and Tasso 1997) | Terms | Explicit | Document weighting
(Billsus and Pazzani 2000) | Terms | Hybrid | Document weighting
(Chakrabarti et al. 1999) | Concepts | Explicit | Link-based
(Chen et al. 2002) | Concepts | Implicit | Document weighting
(Chen and Kuo 2000) | Terms | Implicit | Query operations
(Chen and Sycara 1998) | Terms | Explicit | Query operations
(Chirita et al. 2005) | Concepts | Explicit | Document weighting
(Chirita et al. 2006) | Terms | Implicit | Query expansion
(Dou et al. 2007) | Concepts | Implicit | Document weighting
(Gauch et al. 2003) | Concepts | Hybrid | Document weighting
(Haveliwala 2002) | Usage History | Implicit | Link-based
(Jeh and Widom 2003) | Documents | Implicit | Link-based
(Kerschberg et al. 2001) | Concepts | Explicit | Document weighting
(Koutrika and Ioannidis 2005) | Terms | Explicit | Query operations
(Krulwich and Burkey 1997) | Terms | Explicit | Query operations
(Lang 1995) | Terms | Explicit | Document weighting
(Lieberman 1995) | Documents | Implicit | Document weighting
(Liu et al. 2004) | Terms | Implicit | Query operations
(Ma et al. 2007) | Concepts | None | Document weighting
(Martin and Jose 2004) | Documents | Explicit | Query operations
(Micarelli and Sciarrone 2004) | Stereotypes | Explicit | Document weighting
(Middleton et al. 2003) | Concepts | Explicit | Document weighting
(Noll and Meinel 2007) | Concepts | Implicit | Document weighting
(Pitkow et al.
2002) | Concepts | Implicit | Document weighting
(Sakagami and Kamba 1997) | Terms | Implicit | Document weighting
(Seo and Zhang 2001) | Terms | Hybrid | Document weighting
(Shen et al. 2005b) | Terms | Implicit | Query operations
(Sieg et al. 2007) | Concepts | Explicit | Document weighting
(Speretta and Gauch 2005) | Concepts | Implicit | Document weighting
(Sun et al. 2005) | Other | Implicit | Document weighting
(Sugiyama et al. 2004) | Terms | Implicit | Query operations
(Tan et al. 2006) | Usage History | Implicit | Query operations
(Tanudjaja and Mui 2002) | Concepts | Explicit | Link-based
(Teevan et al. 2005) | Terms | Implicit | Query operations
(Widyantoro et al. 1997) | Terms | Explicit | Document weighting
(Yuen et al. 2004) | Usage History | Implicit | Document weighting
(Zigoris and Zhang 2006) | Documents | None | Document weighting

Table 2.1. Overview of personalized information retrieval systems.

2.1.1 User Profile Representation

Any personalization system has some form of internal representation of each user's preferences. Broadly speaking, the user profile represents which general or global interests the user has that can be exploited by the system in order to adapt the information retrieval mechanism. The ways in which this profile is exploited are varied: e.g. the system can use the user preferences to refine the user's query, to adapt the navigation through the information, or to adapt the presentation of the content. More details on the exploitation of the user profile will be introduced in a later section (section 2.1.3).

The most common approach to user profile representation is the bag of terms approach, where user interests are represented as a set of terms. Other systems try to add more semantics to this representation by representing the user profile with a set of concepts. These concepts have some kind of background knowledge, which usually adds new relations between concepts. Other approaches are item-based, i.e. the user profile is represented as a set of documents that the user has interest in (e.g.
a set of bookmarks or documents); the personalization system will try to extract interests from these documents' content or to use inter-document relations to find more interesting documents (see section 2.1.3). Another important approach is collecting interaction information of the user with the retrieval system. Typically this is done by collecting clickthrough data of past interactions of the users, which hopefully can be interpreted as interests of the user.

Terms

Bag of terms is the most common way of representing a user profile, probably because it fits better the classic Information Retrieval paradigm (Salton and McGill 1986), where both documents and users' information needs (i.e. queries) are expressed as weighted vectors of terms. The user profile in this case is thus represented in a similar way, by expressing user profiles as a set of weighted terms. As Table 2.2 shows, the majority of these systems use this approach. Systems that make use of simple, non-weighted terms for the user profile representation usually complement this approach by adding some semantic relations to the representation. This is the case of the ifWeb system (Asnicar and Tasso 1997), where the terms of the profile are linked by document correlation. Similarly to the ifWeb system, Liu et al. (2004) link terms by correlation, but in this case the correlations are based on co-occurrence in a predefined set of categories, obtained from the Open Directory Project (ODP) [6]. Chirita et al. (2006) cluster the terms extracted from the documents of the user's desktop environment. Terms are only weighted by term frequency, that is, how many times the term appears in the document(s). The authors claim that in this case the "rareness" factor of a term, i.e. the idf (inverse document frequency, calculated from the number of documents that contain the term) should not be used, as a term can be very common on one user's desktop, whereas being very rare in other corpora (e.g.
the WWW). Chen and Sycara (1998) also cluster the term vectors, by allowing N different term vector profiles, each one intended for a different domain of interest; the clustering technique is more basic, based on the cosine similarity between each domain vector profile and the vector representation of the document to be added. Koutrika and Ioannidis (2005) link terms with logical operators, which indicate operations of negation, addition or substitution related to a given term. For instance, if a user is interested in technology, the user profile could have the term 'Apple' together with a link of addition to the term 'computer', which will in some way "categorize" that concept. Somewhat similar to this work, the InfoFinder agent (Krulwich and Burkey 1997) processes the documents interesting to the user with an ID3 learning algorithm and constructs a decision tree with the most important terms as nodes. The tree can be exploited to create on-the-fly boolean personalized queries. The AIS system (Billsus and Pazzani 2000) also applies machine learning techniques, using the terms extracted from the visited documents, weighted by frequency of appearance, as features of a Bayesian network classifier.

In section 2.1.3 we will see that there are different ways of exploiting a profile based on term vectors, although, following the classic vector space model, it is very common to use a cosine similarity measure in order to compute the similarity of a document to the user profile, similarly to how the similarity is computed given a query and a document (Salton and McGill 1986).

[6] Open Directory Project (ODP) is a public collaborative taxonomy of the WWW: http://dmoz.org/

REFERENCE | REPRESENTATION | ADDED SEMANTICS
(Ahn et al. 2007) | Weighted terms | None
(Asnicar and Tasso 1997) | Terms | Term-document correlation
(Billsus and Pazzani 2000) | Weighted terms | Bayesian network
(Chen and Kuo 2000) | Terms | None
(Chirita et al.
2006) | Weighted terms | Clusters
(Chen and Sycara 1998) | Weighted terms | Clusters
(Koutrika and Ioannidis 2005) | Terms | Logical operators
(Krulwich and Burkey 1997) | Terms | Decision tree
(Lang 1995) | Weighted terms | None
(Liu et al. 2004) | Terms | Term-category correlation
(Sakagami and Kamba 1997) | Weighted terms | None
(Seo and Zhang 2001) | Weighted terms | None
(Shen et al. 2005b) | Weighted terms | None
(Sugiyama et al. 2004) | Weighted terms | None
(Teevan et al. 2005) | Weighted terms | None
(Widyantoro et al. 1997) | Weighted terms | None

Table 2.2. Overview of term-based user profile representation systems.

Concepts

A concept is "an abstract or generic idea generalized from particular instances" [7]. Concept instances obtain their meaning (their semantic meaning) through relations to other concepts or other types of entities, such as documents or literals (e.g. terms). For instance, a category in the Yahoo! Directory [8] or in the ODP can be considered a concept that is defined by 1) the label of the concept, e.g. 'Microsoft Windows', 2) the Web documents related to the category, and 3) the parent concepts, e.g. 'Operating Systems', 'Computers', and the children concepts, e.g. 'Windows XP', 'Windows Vista'. In this work we want to stress the difference between taxonomies and ontologies. A taxonomy is a subset of an ontology, and represents a collection of concepts that are ordered in a hierarchical way; an example of a taxonomy is ODP. Ontologies, among other formal specifications, and using a loose interpretation, allow the definition of non-taxonomical relations, which are able to relate concepts by relations that do not indicate supertype-subtype links. For instance, in the previous example a non-taxonomical relation could be 'Windows Vista' (is competitor of) 'Mac OS X'. Table 2.3 shows the classification of each concept-based system.
Note that although some approaches claim to use ontology concepts, we have classified as ontological concepts only those systems that at least make use of non-taxonomical relations. The definitions of concepts and their semantic relations are usually contained in KBs, which are usually static or semi-static. Examples of KBs are ODP, any ad-hoc created ontology or taxonomy, and even folksonomies (Mathes 2004), created by the collaborative tagging of users over a document corpus.

[7] Second entry, Merriam-Webster dictionary.
[8] http://dir.yahoo.com/

REFERENCE | CONCEPTS | KNOWLEDGE BASE
(Aroyo et al. 2007) | Ontology | Ad-hoc
(Chakrabarti et al. 1999) | Taxonomy | Ad-hoc
(Chen et al. 2002) | Weighted taxonomy | Ad-hoc
(Chirita et al. 2005) | Taxonomy | ODP, top 16
(Dou et al. 2007) | Weighted taxonomy | Ad-hoc
(Gauch et al. 2003) | Weighted taxonomy | Yahoo! Directory, ODP
(Kerschberg et al. 2001) | Taxonomy | Ad-hoc
(Ma et al. 2007) | Taxonomy | ODP
(Middleton et al. 2003) | Taxonomy | Digital library
(Noll and Meinel 2007) | Folksonomy | Collaborative
(Pitkow et al. 2002) | Weighted taxonomy | ODP, top 1000
(Sieg et al. 2007) | Weighted taxonomy | ODP
(Speretta and Gauch 2005) | Weighted taxonomy | ODP
(Tanudjaja and Mui 2002) | Weighted taxonomy | ODP

Table 2.3. Overview of concept-based user profile representation in personalized retrieval systems.

One of the most exploited taxonomy KBs in personalized systems is ODP, mainly because it is a widely adopted taxonomy with thousands of contributors, has a rich number of concepts, and has a massive amount of Web documents related to its concepts, which eases its understanding and exploitation by personalized systems. Gauch and Speretta represent the user profile by weighted topics from the ODP and Yahoo! Directory taxonomies in two related personalization approaches (Gauch et al. 2003; Speretta and Gauch 2005). Pitkow et al. (2002) and Sieg et al.
(2007) also used weighted topics from ODP, but they limited the profile to the top 1000 categories. Chirita et al. (2005) had to limit the number of top topics used to 16, as they computed a variation of the PageRank algorithm (Brin and Page 1998) for every topic, which was computationally expensive. Ma et al. (2007) represented the profile by a set of topics, although this profile was obtained by mapping a term-based profile. They follow different heuristics for this mapping, checking if the term matches any category name and, otherwise, checking the similarity of the term, or similar terms, to the textual description of each category profile. Tanudjaja and Mui (2002) did not weight the ODP topics in the profile, but they did allow indicating a negative or positive preference for each topic. A digital library retrieval system introduced by Middleton et al. (2003) made use of the existing digital library taxonomies to construct the user profile. Other systems build an ad-hoc taxonomy. This is the case of the Personal View Agent system (Chen et al. 2002) and the Focused Crawler system (Chakrabarti et al. 1999), which represent the user profile as a set of weighted topics from an ad-hoc predefined taxonomy. The system introduced by Dou et al. (2007) uses a predefined classification scheme of 67 categories given by their automatic classification technique. In the WebSifter II system (Kerschberg et al. 2001) the users are able to create their own topics in their own personal taxonomy.

Aroyo et al. (2007) present a personalized system, in the personalized TV access domain, that exploits non-taxonomic relations over an ontological KB. In this case the user profile is constructed from concepts such as time of day, genre, or location. The genre concepts belong to a taxonomy, similarly to the latter systems. However, the system also uses geographical and time ontologies in order to reason about which time of day 'Friday afternoon' refers to, or what local content belongs to the location 'England'.
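The difference between purely taxonomic lookup and the richer ontological relations discussed in this section can be sketched with a toy KB of typed relations; all concepts and relation names here are illustrative, not taken from any of the surveyed systems:

```python
# Toy ontology: typed relations, not only supertype/subtype links
RELATIONS = [
    ("Windows Vista", "is_a", "Operating System"),   # taxonomic
    ("Mac OS X", "is_a", "Operating System"),        # taxonomic
    ("Windows Vista", "competitor_of", "Mac OS X"),  # non-taxonomic
    ("Friday afternoon", "part_of", "Friday"),       # time ontology
]

def related(subject, relation):
    """Follow one typed relation from a concept."""
    return [o for s, r, o in RELATIONS if s == subject and r == relation]

# A taxonomy alone can only answer the first query; an ontology with
# non-taxonomic relations can also answer the second
parents = related("Windows Vista", "is_a")
rivals = related("Windows Vista", "competitor_of")
```

Systems classified above as using ontological concepts are, by our criterion, those that exploit at least relations of the second kind.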
To the best of our knowledge, and at the time of writing, there are no other examples of personalized systems that truly exploit ontology KBs in a personalization system.

Folksonomies are KBs created collaboratively by the tagging actions of users over a document corpus (Mathes 2004). Noll and Meinel (2007) demonstrate that user profiles based on these tag corpora can be useful for personalized systems. The final user profile is modeled as a set of weighted tags from the folksonomy, where the weight is given by the frequency of the tag in the user's tag set, following the hypothesis that the user's frequently used tags are more representative of her interests.

Usage History

User profiles based on usage history represent previous interactions of the user with the system. These approaches rely on the hypothesis that previous information about the interaction of the user with the system can provide useful information to extract interests of the user. This hypothesis is shared with many other systems, which exploit this kind of implicit information in order to construct the user profile (see section 2.1.2, implicit information). However, the systems here directly model the user profile as usage data, whereas other systems only consider it as one more step of the user profile learning process.

Usage history in retrieval systems is often seen as clickthrough data. Clickthrough data is normally modeled as the query that the user executed in the system, the returned (multimedia) documents, and the subsequent documents that the user opened to view, in such a way that this interaction can indicate what documents could have been important to the user given the query. One simplification of clickthrough data is storing only the user queries, with no interaction information whatsoever. Although user profiles are represented as this usage history information, the personalization system usually applies some kind of preprocessing over these profiles during the exploitation process.
Table 2.4 shows the type of usage history and the preprocessing procedure of these systems.

REFERENCE | TYPE OF USAGE | PREPROCESSING
(Haveliwala 2002) | Queries | Classification
(Tan et al. 2006) | Clickthrough | Language model
(Yuen et al. 2004) | Queries | Bayesian

Table 2.4. Overview of usage history based user profile representation in personalized retrieval systems.

Haveliwala (2002) suggests using past queries in order to bias the current query of the user towards one given topic or another. Yuen et al. (2004) take a different approach, using the past queries as input to a Bayesian network that classifies terms as relevant or not to the user. Finally, Tan et al. (2006) compute a language model over the historical user profile, in such a way that it can be combined with the language model of the current query in the personalization step.

Documents

User profiles based on documents contain those items that are relevant to the user. The main application of this type of profile is to find documents that are similar to those stored in the user profile. The Letizia system (Lieberman 1995) uses this approach by computing a term-vector similarity between documents in the profile and those in the retrieval space. Jeh and Widom (2003) interpret the bookmarks of the user as a document-based user profile, trying to exploit the link structure of the Web to find similar documents. Zigoris and Zhang (2006) represent this profile as a graph, where users are nodes that link to the preferred documents. This graph is used as input to different machine learning approaches. Martin and Jose (2004) defined a workspace where users can store their interesting documents. In order to establish manual relations between documents, they define the concept of a bundle, similar to the concept of a folder in an operating system, where the user can store related documents.
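The Letizia-style use of a document-based profile can be sketched by merging the profile documents into a centroid term vector and ranking candidate documents against it. The toy texts and the raw term-frequency weighting are illustrative simplifications (a real system would at least apply tf-idf and stopword removal):

```python
import math
from collections import Counter

def vec(text):
    """Raw term-frequency vector of a text (illustrative weighting)."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(w * v.get(t, 0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values())) or 1.0
    return dot / (norm(u) * norm(v))

def profile_centroid(profile_docs):
    """Merge the profile documents into a single averaged term vector."""
    centroid = Counter()
    for d in profile_docs:
        centroid.update(vec(d))
    return {t: w / len(profile_docs) for t, w in centroid.items()}

profile = profile_centroid(["tropical birds habitat", "bird species guide"])
candidates = ["rare bird species", "stock market report"]
best = max(candidates, key=lambda d: cosine(profile, vec(d)))
```

The point of the document-based representation is precisely that no explicit term or concept list is maintained: interests are whatever the stored documents happen to contain.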
Stereotypes

Stereotyping users is a well-known approach for user modeling. Stereotypes are created either manually or by mining or clustering the information of a group of users. The goal of a stereotype is to represent the interests of a group of users, in such a way that personalizing the system with a given stereotype will maximize the quality of personalization for this group. Micarelli and Sciarrone (2004) follow this approach by having a predefined set of stereotypes (e.g. Java programmer, scientist, Machine Learning researcher). The system can be seen as a combination of stereotypes and term-based profiles, as each stereotype is further defined by a set of weighted terms.

Others

Sun et al. (2005) process the clickthrough information with a dimension reduction approach known as Singular Value Decomposition (Furnas et al. 1988). The clickthrough information can be seen as <user, query, document> triplets, from which new weighted preference triplets are calculated for both new queries and documents. Therefore, the final user profile is a new set of weighted triplets which will be used in the profile exploitation phase.

2.1.2 User Profile Learning

User profile learning can be considered a research area by itself; this area studies the different techniques to acquire the user interests (Gauch et al. 2007). One of the main characteristics of user profile learning approaches is whether they rely on an explicit interaction from the user (explicit approaches) or whether they collect this information without needing any extra interaction from the user. The main advantage of explicit feedback systems is that there is a higher degree of confidence in the collected information, as it is the user herself who gives the interest information the system is adapting to. The problem with explicit techniques is that users are often reluctant to provide this extra information to the system, as it requires an extra effort from them (Shen et al. 2005b). On the other hand, implicit feedback techniques (Kelly and Teevan 2003) do not need to burden users with this extra information. Although some studies indicate that implicit indicators can be comparable to explicit ones (Claypool et al.
2001; Sakagami and Kamba 1997; Thomas and Hawking 2006), other studies point out that the quality of these indicators depends on the searcher's experience, the retrieval task at hand, and the retrieval system (White et al. 2005a).

Explicit Feedback User Profile Learning

The easiest mechanism for obtaining the user profile explicitly is to directly let the user manually edit or modify the profile. The personalized system can present a set of predefined terms/concepts of the user profile and then allow the user to assign interest weights to all or some of the concepts (Chirita et al. 2005; Micarelli and Sciarrone 2004), or even let the user construct the profile entirely (Kerschberg et al. 2001). However, a recent study concluded that these approaches did not necessarily result in an increase in performance of the personalized system (Ahn et al. 2007).

Another type of explicit information that needs, in general, less interaction from the user is asking the user which documents match their interests. This can be done either by asking for a set of examples of preferred content (Asnicar and Tasso 1997; Chakrabarti et al. 1999; Krulwich and Burkey 1997; Martin and Jose 2004; Sieg et al. 2007), or by using relevance feedback techniques (Rocchio and Salton 1971), in which the user indicates relevant documents during the retrieval task, e.g. by indicating which documents returned by a query were subjectively relevant (Chen and Sycara 1998; Koutrika and Ioannidis 2005; Lang 1995; Middleton et al.
2003; Tanudjaja and Mui 2002; Widyantoro et al. 1997). Once the interesting documents are provided, and if the system's profile is not based solely on documents, there has to be some type of preprocessing to construct the user profile. Typical approaches are extracting the relevant terms and adding them to the user profile, normally using statistical techniques (Asnicar and Tasso 1997; Koutrika and Ioannidis 2005), sometimes also allowing negative interests to be added (Widyantoro et al. 1997). Systems based on taxonomical profiles normally use classification techniques, or the underlying KB, to update the user preferences (Chakrabarti et al. 1999; Middleton et al. 2003; Sieg et al. 2007; Tanudjaja and Mui 2002). Other systems use these documents as input for their machine learning modules (Krulwich and Burkey 1997; Lang 1995). Table 2.5 presents a summary of these explicit techniques.

REFERENCE                       INTEREST            PREPROCESSING
(Asnicar and Tasso 1997)        Document/Example    Term extraction
(Chakrabarti et al. 1999)       Document/Example    Classification
(Chirita et al. 2005)           User profile        None
(Chen and Sycara 1998)          Document/Feedback   Term extraction
(Kerschberg et al. 2001)        User profile        None
(Koutrika and Ioannidis 2005)   Document/Feedback   Term extraction
(Krulwich and Burkey 1997)      Document/Example    Machine learning
(Lang 1995)                     Document/Feedback   Machine learning
(Martin and Jose 2004)          Document/Example    None
(Micarelli and Sciarrone 2004)  User profile        None
(Middleton et al. 2003)         Document/Feedback   Classification
(Sieg et al. 2007)              Document/Example    Classification
(Tanudjaja and Mui 2002)        Document/Feedback   Classification
(Widyantoro et al. 1997)        Document/Feedback   Term extraction

Table 2.5. Overview of explicit feedback learning in personalized systems.

Implicit Feedback User Profile Learning

Implicit feedback personalization systems have to monitor the user interaction with the system.
The learning modules of these systems are usually based on clickthrough data: which queries the user has executed previously and which documents the user opened. Implicit feedback techniques are based on implicit relevance indicators, which decide when a document was relevant to the user without an explicit indication. Table 2.6 shows the classification of the implicit feedback techniques used by these systems. Systems can monitor the browsing actions of the user (Chen et al. 2002; Sakagami and Kamba 1997; Sugiyama et al. 2004; Yuen et al. 2004). The early Anatagonomy system (Sakagami and Kamba 1997) exploited the documents opened by the user, applying a classification technique in order to create the taxonomy-based user profile. It used two implicit indicators of relevancy: the action of scrolling an article, and the action of 'enlarging' it (i.e. opening the document in a single window). The Letizia Web system (Lieberman 1995) had a slightly different approach: apart from opening a document as an indicator of interest, the fact that a user clicked a link indicated an interest in the current document. Chen et al. (2002) used another type of relevancy indicator, a threshold of 2 minutes of viewing time; similarly to the Anatagonomy system, they classified the interesting documents into the user profile. Sugiyama et al. (2004) also used a viewing threshold indicator, although theirs was normalized by the number of terms in the document (0.371 seconds per term). They used term extraction techniques in order to add new weighted terms from the inferred interesting documents to the user profile. Yuen et al. (2004) solely used the opened-document action, with no relevancy indicators, and they did not have to process this information, as they used the documents directly as input for their Bayesian network, used for the profile exploitation.
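The viewing-time indicators above can be sketched as simple predicates; the fixed 2-minute threshold follows Chen et al. and the per-term normalization follows Sugiyama et al., while the function names are illustrative:

```python
def relevant_by_dwell(seconds_viewed, fixed_threshold=120.0):
    """Chen et al.-style indicator: relevant if viewed for over 2 minutes."""
    return seconds_viewed >= fixed_threshold

def relevant_by_normalized_dwell(seconds_viewed, n_terms, per_term=0.371):
    """Sugiyama et al.-style indicator: the threshold scales with document length."""
    return seconds_viewed >= per_term * n_terms

relevant_by_dwell(150)                 # True: above the 2-minute threshold
relevant_by_normalized_dwell(30, 200)  # False: a 200-term document requires ~74.2 s
```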
Clickthrough data complements browsing history with the information of the past queries that produced the result sets of documents. This is one of the most common sources of implicit feedback. However, Haveliwala (2002) suggested that past queries themselves could be enough to create the user profile model, in this case based on ODP categories and updated by means of classification techniques. It is important to differentiate those systems that exploit the whole interaction information (marked with an * in Table 2.6), i.e. the relations launched query → interacted document, from those systems that treat queries and documents as separate sources of implicit information. Chen and Kuo (2000) exploit the query-document relation by creating a correlation matrix between the issued queries and the opened documents. Dou et al. (2007) directly use the obtained clickthrough data in order to search for similar data from previous users in a collaborative recommendation approach, combining the probability of previous clicks by the user with those of other users, and also adding a topic similarity between documents. Sun et al. (2005) adapt a Singular Value Decomposition technique in order to process the clickthrough information, which can be seen as <user, query, document> triplets, used as the tensors for the matrix dimension reduction technique. Tan et al. (2006) use queries and clicked documents in order to represent the usage history by a language model. Regarding systems that do not use the interaction information, Liu et al. (2004) and Speretta and Gauch (2005) use the past queries and opened documents as input to their classification approach for the profile construction, whilst Shen et al. (2005) and Teevan et al. (2005) extract the terms from these past queries and accessed documents. The latter, being a desktop search application, also considers actions such as creating a document.
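The SVD-based processing of clickthrough data used by Sun et al. can be illustrated, for a single user, as a low-rank reconstruction of a query-document click matrix. This is a deliberately simplified two-dimensional sketch (the actual approach works on three-way <user, query, document> data); the click matrix is a toy assumption:

```python
import numpy as np

# Rows are queries, columns are documents; 1 means the document was clicked
# for that query. The data itself is an illustrative assumption.
clicks = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

# Truncated SVD: keep the top-k singular values to expose latent structure.
U, s, Vt = np.linalg.svd(clicks, full_matrices=False)
k = 2
smoothed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# `smoothed` now holds soft preference weights, including nonzero scores for
# query-document pairs that were never clicked but are latently related.
```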
Another source of implicit information is given by Noll and Meinel (2007), who used the tagging set of a user in order to create the final profile, calculating tag frequencies as the importance of each tag to the user. Note that the act of tagging a piece of content can be considered implicit or explicit depending on the final goal of the user. For instance, if the only intent of the tagging action is to facilitate the personalization system's learning, it can be considered an explicit action. In the case of Noll and Meinel's approach, the tags are extracted from other tag corpora, such as bookmarking and content services, so the users had already performed the tagging action with a different goal.

Chirita et al. (2006) make use of a corpus that exemplifies the interests of the user; in their case they use the desktop documents as the source of the user profile. They extract terms related to this corpus and apply clustering techniques, experimenting with several term extraction techniques such as document summarization, sentence selection, centroid term calculation, or NLP (Natural Language Processing) techniques.

REFERENCE                       IMPLICIT ITEM      PREPROCESSING
(Chirita et al. 2006)           Personal corpus    Data mining
(Chen et al. 2002)              Opened documents   Classification
(Chen and Kuo 2000)             Clickthrough*      Term correlation
(Dou et al. 2007)               Clickthrough*      Probability/Classification
(Haveliwala 2002)               Past queries       Classification
(Jeh and Widom 2003)            Bookmarks          None
(Liu et al. 2004)               Clickthrough       Classification
(Lieberman 1995)                Bookmarks          Term extraction
(Noll and Meinel 2007)          Tagging            Tag frequency
(Pitkow et al. 2002)            Opened documents   Classification
(Sakagami and Kamba 1997)       Browsing history   Term extraction
(Shen et al. 2005b)             Clickthrough       Term extraction
(Speretta and Gauch 2005)       Clickthrough       Classification
(Sun et al. 2005)               Clickthrough*      SVD
(Sugiyama et al. 2004)          Browsing history   Term extraction
(Tan et al. 2006)               Clickthrough*      None
(Teevan et al. 2005)            Clickthrough       Term extraction
(Yuen et al.
2004)                           Browsing history   None

Table 2.6. Overview of implicit feedback learning in personalized systems.

Hybrid Approaches

There are approaches that try to combine the advantages of explicit and implicit feedback (see Table 2.7). An example hybrid approach is to use implicit techniques to mine the user interests, but also let the users view and edit their profiles. Ahn et al. (2007) and Gauch et al. (2003) follow this approach, obtaining the visited documents and using term extraction and classification techniques, respectively, to build the profiles from the implicit information. Another approach is exemplified in the WAIR system (Seo and Zhang 2001), where explicit feedback is used to solve the cold-start problem: when the user is new to the system, the system learns more rapidly by asking the user for explicit relevance feedback; once the user has a sufficiently rich user profile, the subsequent updates can be done by monitoring the interactions of the user with the content. The WAIR system uses four different sources of implicit feedback: reading time, bookmarking, scrolling, and opening. The AIS system (Billsus and Pazzani 2000) uses an explicit relevance feedback approach, both negative and positive, letting the user rate the accessed documents as "relevant" or "irrelevant". The system's implicit feedback includes opened documents, with viewing time as interest indicator, and clicks to see more information about the opened documents. They also include negative implicit feedback in the form of documents not followed after a query.

REFERENCE                   IMPLICIT           EXPLICIT             PREPROCESSING
(Ahn et al. 2007)           Browsing history   User profile         Term extraction
(Billsus and Pazzani 2000)  Clickthrough       Relevance feedback   Machine learning
(Gauch et al. 2003)         Browsing history   User profile         Classification
(Seo and Zhang 2001)        Browsing history   Relevance feedback   Term extraction

Table 2.7. Overview of hybrid feedback learning in personalized systems.

2.1.3 User Profile Exploitation

Query Operations

Query operation techniques focus on refining the information need expressed by the user's input query.
Query operations try to "modify" the actual information need, expressed by means of the user query, in such a way that the query also considers the long-term interests of the user. Queries in information retrieval are often seen as vectors in a finite space model (see Figure 2.1): the axes are formed by the terms of the search space, and the values of the query vector are given by the level of importance that each query term has for representing the information need. The idea of personalization through query operations is adding information from the user profile to bring the query, viewed as a vector, closer to documents that are not only relevant to the query but also relevant to the preferences of the user. Figure 2.1 shows a graphical representation of this concept: starting from the user profile the query is going to be "personalized" to, the modified query "comes closer" geometrically to the user representation. In summary, we can say that the final query is a combination of the local interest of the user (i.e. the query q) and the global interests of either user u1 or u2.

[Figure 2.1 plots, in a two-dimensional term space, the query q = (0.7, 0.3) and two user profiles u1 = (0.2, 0.8) and u2 = (0.9, 0.1). The modified queries are qm1 = q/2 + u1/2 = (0.45, 0.55) and qm2 = q/2 + u2/2 = (0.80, 0.20).]

Figure 2.1. Query operation example for a two-dimension projection.

Depending on how the query is modified, query operations are classified into term reweighting or query expansion operations. The example above shows a modification of the query term weights; this is what is called term reweighting.
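The term reweighting of Figure 2.1 amounts to averaging the query vector with the user profile vector. A minimal sketch reusing the figure's numbers, with hypothetical term names t1 and t2:

```python
def reweight(query, profile, alpha=0.5):
    """Combine query and profile term vectors: alpha*query + (1-alpha)*profile."""
    terms = set(query) | set(profile)
    return {t: alpha * query.get(t, 0.0) + (1 - alpha) * profile.get(t, 0.0)
            for t in terms}

q = {"t1": 0.7, "t2": 0.3}
u1 = {"t1": 0.2, "t2": 0.8}
reweight(q, u1)  # ≈ {"t1": 0.45, "t2": 0.55}, as in Figure 2.1
```

With alpha closer to 1 the original query dominates; with alpha closer to 0 the long-term profile dominates.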
Query expansion adds new terms or information to the query, complementing the query representation with information not made explicit by the query. Take this example: a user searches for the term "jaguar"; normally the system would not be able to disambiguate the term between "Jaguar the car brand" and "jaguar the animal". But if the system has some sort of query expansion technique, and the user profile contains the term "animal", the system would be likely to return documents with the correct disambiguation by adding this term and finally using as input query "jaguar animal" instead of "jaguar" alone. Finally, a system can combine the two techniques, changing the term importance weights and also adding new terms to the query.

Query expansion is often used in personalized meta-search engines (see section 2.1.4). These search systems redirect an input query to one or more external search engines, performing a merge or aggregation of each returned search result list. Terms from the user profile can be added to the original queries and sent to each search engine; term reweighting, in contrast, normally needs access to the internal functions of the search engine (although some engines, though not the most popular ones, allow term reweighting through optional parameters).

Relevance feedback (Rocchio and Salton 1971; Salton and Buckley 1990) is a particular case where query reformulation takes place. This technique takes explicit relevance judgments from users, who decide which documents returned by a query are relevant and which are not. In personalization, rather than extracting the relevant information from the interactions with the user, the search system uses the representation of the user profile. It is important that the query adaptation to the user profile does not make the user profile predominant, which could lead to a result set that, although relevant to the user preferences, is not relevant to the original query. A generalization of both term reweighting and query expansion techniques can be found in (Baeza-Yates and Ribeiro-Neto 1999).
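The jaguar example can be sketched as appending the top-weighted profile terms to the query; the profile contents and the single-term cutoff are illustrative assumptions:

```python
def expand_query(query, profile, k=1):
    """Append the k top-weighted profile terms not already in the query."""
    candidates = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)
    extra = [t for t, _ in candidates if t not in query.split()][:k]
    return " ".join([query] + extra)

profile = {"animal": 0.9, "wildlife": 0.6, "safari": 0.3}
expand_query("jaguar", profile)  # "jaguar animal"
```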
An example of a pure query expansion system is presented by Chen and Kuo (2000) and Chen and Sycara (1998); their approach exploits the term-correlation matrix that represents the user profile, expanding the query with the terms most correlated with the query terms. Pitkow et al. (2002) and Shen et al. (2005) construct a user model based on a weighted term vector, expanding each user query with the most important terms (i.e. those with higher weights) in the user model. In the desktop search engine presented by Teevan et al. (2005), the system implicitly builds a "personal index" from implicit interactions of the user within the OS's desktop and interactions of the user with the search system. Terms and related weights are then extracted from this index and used for query term reweighting and expansion at query time, enabling personalization for desktop search. Chirita et al. (2006) also make use of the desktop information. The user profile, learned from the documents on the user's desktop PC, is clustered by term frequency. Whenever the user issues a query, the top terms in the top clusters are added to the user query. Sugiyama et al. (2004) differentiate long-term preferences, such as past queries or session browsing data, from short-term preferences, such as the current session's history. Once the user profile is collected, they apply query expansion and term reweighting using the classic Rocchio query reformulation (Rocchio and Salton 1971). Koutrika and Ioannidis (2005) apply query expansion but with a different query reformulation technique. Their user profiles are represented as sets of terms linked by expansion operators, such as AND, OR, NOT, and replacement. For instance, following the example profile in Figure 2.2, if a user issued the query "apple", the final executed query after the query expansion would be "(Apple Inc. OR Apple Computer, Inc.) AND computers NOT fruit". Krulwich and Burkey (1997) use a query reformulation technique based on decision trees. They extract terms from the currently open document and apply this information to the decision tree, which represents the user profile.
The output of this decision tree is a personalized query whose results are documents related both to the document the user is viewing at the moment and to the user's interests.

[Figure 2.2 shows the term "apple" linked by a REPLACE operator to "Apple Inc." and "Apple Computer, Inc.", by an AND operator to "computers", and by a NOT operator to "fruit".]

Figure 2.2. Example of user profile based on logic term operators.

Martin and Jose (2004) create proactive queries, i.e. queries issued with no initial user query, by analyzing the documents present in the user profile and using the explicit relations that the user indicated by grouping documents in different bundles. The query is then presented to the user, who can choose to edit it and/or to launch it in a Web search engine. Chen and Sycara (1998) also create proactive queries, obtained from the top query terms of the user profile.

There are other approaches that, although they share the final idea of modifying the query with the user model, use other types of retrieval models. Tan et al. (2006) use a language model IR approach (Baeza-Yates and Ribeiro-Neto 1999). They use different models, one for the actual query of the user, and a second for the user profile representation. The final query is a combination of these two language models, in which the query is modified by the information given by the user profile. Although the system proposed by Liu et al. (2004) does not alter the terms of the query, it does change the information in the sense that the query is biased towards one topic or another by selecting topic-specific search engines. The user model is a term-topic correlation matrix that is used to relate the user's query to a list of topics. For instance, a user fond of computers and hardware will be more likely to have a higher similarity between the query "apple" and the topic Computers than the topic Food. The query is then submitted several times: in a first mode, no category is indicated, and in subsequent modes the top inferred categories are indicated. Finally, the results are merged using a voting algorithm, taking into consideration the ranking of the categories produced by the user model.
Link-Based Personalization

Together with the techniques of the previous section, these techniques are the ones most often seen in commercial personalized search engines. Query operations are often applied because they fit well in personalized meta-search engines (see section 2.1.4); link-based personalization is used because it follows the trend of link-based ranking algorithms. These have been a huge success in the past years, beginning with Google's PageRank (Brin and Page 1998). Link-based personalization directly affects the document ranking techniques. These are based on the idea that "a page has a high rank if the sum of the ranks of its backlinks is high": the rank of every document makes it climb in the result set, so pages that are considered "important" by the PageRank algorithm are considered more relevant to a user. One main advantage of these approaches is that the system does not have to take into consideration the content of the document, only the hyperlinks inherent in any Web page.

PageRank values are often computed by Web crawlers that start from an initial page and do a random walk through the links of the page and the subsequent links of the pages pointed to by those links. In general, link-based personalized algorithms are modifications of Google's PageRank (Haveliwala 2002; Jeh and Widom 2003) or the HITS authority and hub algorithm (Chirita et al. 2003; Tanudjaja and Mui 2002). However, there are many ways to introduce personalized search in PageRank algorithms:

• Topic-sensitive PageRank. A different PageRank value is computed for every topic, to capture more accurately the notion of importance within the category.
The final results can thus be personalized with the user's desired topics by combining the topic-biased PageRanks. The topic information can be extracted from a category hierarchy (Haveliwala 2002; Tanudjaja and Mui 2002), using hard relations with already existing Web categories like ODP, or starting from a set of documents considered representative of the user interests (Chakrabarti et al. 1999), using a classifier to relate user-representative documents, the crawled documents, and the query to the set of predefined topics.

• Relevant documents. A set of relevant documents is used to alter the normal PageRank algorithm and give a higher rank value to documents related (through links) to this set (Chirita et al. 2003; Jeh and Widom 2003). Normally this set of relevant documents is extracted from the bookmarks of the user, which are considered a good source of interest.

Personalized alterations of the PageRank algorithm are mostly easy to develop, but there is still a tradeoff in scalability: computing these values requires high computational resources, and it is impossible nowadays to compute a full personal PageRank value for every user (which would be, without doubt, the ideal use case), as Brin and Page originally suggested. Some solutions to this have been the calculation of only a small set of values for a small set of topics (Chakrabarti et al. 1999; Haveliwala 2002), or more efficient algorithms where partial PageRank vectors are computed, allowing their combination into a final personalized vector (Jeh and Widom 2003).

Document Weighting

Document weighting algorithms modify the final ranking of the result set documents. Most search engines compute a ranking value of relevance between the document and the information need (e.g. the user's query); note that this ranking can be the combination of several scores, but all of them are user independent. A personalized search engine can then compute a personalized ranking value for every document in the result set.
The benefit of this approach is that this value only has to be computed for the top returned documents of the result set; the main drawback is that it has to be computed at query time. This approach is also suitable for meta-search engines (Kerschberg et al. 2001), as the user-dependent algorithm can commonly focus on a small quantity of the top returned documents, being able to compute a personalized score in real time by accessing the document's content or even just the provided snippet summaries. The user-dependent score usually comes from a document-user similarity function, based on term-frequency similarity (Micarelli and Sciarrone 2004), classification and clustering (Middleton et al. 2003), Bayesian approaches (Zigoris and Zhang 2006), etc. Figure 2.3 shows the typical flow chart of this type of personalized information retrieval system.

[Figure 2.3 depicts two configurations: a meta-search engine dispatching the query to several search engines, each with its own search space, and a standalone search engine over a single search space. In both, the returned search results are compared against the user profile (profile similarity) to produce the personalized results.]

Figure 2.3. Typical schema of document weighting on personalized retrieval systems.

The most common application of this user-document similarity score is the combination of the user-independent score with the personalization score, resulting in a personalized result reorder. Systems that use classification techniques can also cluster the results and present first those clusters that have a higher similarity to the profile. Finally, the user-document similarity score can also be used to aid the navigation and browsing actions of the user: in this information retrieval paradigm, the user navigates the system's corpus while the system suggests interesting links or adapts the browsing options in a personalized way. Table 2.8 classifies the document weighting techniques by the final use of the score value and the user-document similarity measure.

REFERENCE                       SCORE USE          SIMILARITY MEASURE
(Ahn et al.
2007)                           Result reorder     Term-vector
(Aroyo et al. 2007)             Result filtering   Property
(Asnicar and Tasso 1997)        Navigation         Machine learning
(Billsus and Pazzani 2000)      Result reorder     Machine learning
(Chen et al. 2002)              Result reorder     Topic-vector similarity
(Chirita et al. 2005)           Result reorder     Topic similarity
(Chen and Sycara 1998)          Result reorder     Term-vector similarity
(Dou et al. 2007)               Result reorder     User/Topic similarity
(Gauch et al. 2003)             Result reorder     Term-vector similarity
(Kerschberg et al. 2001)        Result reorder     Search engine/popularity/topics
(Lang 1995)                     Clustering         Classification
(Lieberman 1995)                Navigation         Term-vector similarity
(Ma et al. 2007)                Result reorder     Topic similarity
(Micarelli and Sciarrone 2004)  Result reorder     Term frequency
(Middleton et al. 2003)         Clustering         Classification
(Noll and Meinel 2007)          Result reorder     Term-vector similarity
(Pitkow et al. 2002)            Result reorder     Term-vector similarity
(Sakagami and Kamba 1997)       Navigation         Term-vector similarity
(Seo and Zhang 2001)            Result reorder     Term-vector
(Sieg et al. 2007)              Result reorder     Topic
(Speretta and Gauch 2005)       Result reorder     Topic
(Sun et al. 2005)               Result reorder     User-query-document
(Widyantoro et al. 1997)        Result reorder     Term-vector
(Yuen et al. 2004)              Clustering         Classification
(Zigoris and Zhang 2006)        Result reorder     Machine learning

Table 2.8. Classification of document weighting exploitation in personalized systems.

• Result Reorder

The top n documents returned by the query are reordered according to the relevance of these documents to the user profile. The underlying idea is to improve the ranking of documents that are relevant to the user but also relevant to the query. Unlike query operations (see Query Operations above), result reorder does not change the query information, thus guaranteeing the query relevance. An example of result reordering is the HUMOS system (Micarelli and Sciarrone 2004), which modifies the results of a query returned by a popular search engine.
For each document in the result set, it computes a score considering only the document and the user profile, presenting first the higher-ranked documents. Each user profile contains a set of weighted stereotypes, given by the interest in the domain represented by the stereotype. Each domain of interest has an associated topic and a set of terms related to the domain. A document is finally ranked using a term frequency similarity, calculated as a scalar product between the occurrences of a term in the user profile and in the document, using the weight of the slot the term belongs to. They also introduce the concept of the Term Data Base (TDB), which is a set of terms related to the domain of interest of the user that, to a lower degree, are taken into consideration even if they do not belong to the user profile.

Zigoris and Zhang (2006) use a hierarchical Bayesian network representation of the user model to reorder the search results. The main advantage is that models from other users can be used to solve the cold-start problem, where the system does not have any information about a user who is new to the system. The AIS system (Billsus and Pazzani 2000) uses a naïve Bayesian classifier over the user model, using as features the terms in the user profile; the document's score is the predictor value given by the classifier.

When the user profiles are represented as a set of taxonomic concepts, it is common to use a topic-document similarity to compute the personalization score (Chirita et al. 2005; Sieg et al. 2007). The similarity score is calculated by means of a distance measure (e.g. a taxonomic distance) between the topics associated with the documents and the topics in the user profile. Vector similarity between the user representation and the document representation is one of the most common algorithms for computing the personalization score; this vector similarity is often calculated as the cosine of the two vector representations.
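Result reordering of this kind can be sketched as interpolating the engine's user-independent score with a profile-document cosine similarity; the interpolation weight and the tuple-based result representation are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse weighted term vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reorder(results, profile, lam=0.5):
    """results: list of (doc_id, engine_score, term_vector).
    Rerank by lam*engine_score + (1-lam)*profile-document similarity."""
    scored = [(doc_id, lam * s + (1 - lam) * cosine(profile, vec))
              for doc_id, s, vec in results]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

With lam = 1 the original ranking is kept intact; lower values let the user profile dominate, at the risk of drifting away from query relevance, as discussed above.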
In the case of taxonomy-based systems (Chen et al. 2002; Gauch et al. 2003; Ma et al. 2007; Speretta and Gauch 2005), the similarity value is computed between the weighted topics representing the interests of the user and the topics associated with each search result. Pitkow et al. (2002) compute the vector similarity between the terms associated with the topics of the user profile and the title and metadata of the returned documents. Term-based recommender systems (Ahn et al. 2007; Chen and Sycara 1998; Seo and Zhang 2001; Widyantoro et al. 1997) compute the same vector similarity value, but using the term vector representation of the document content.

Collaborative filtering methods commonly perform a result reorder, combining the user profile with other user profiles (usually with a user-user similarity measure). Sun et al. (2005) and Dou et al. (2007) mine the query log clickthrough information to perform a collaborative personalization of the result set, ranking higher those documents that similar users had clicked previously for similar queries. Sun et al. (2005) apply a dimensional reduction preprocessing to the clickthrough data to find latent semantic links between users, queries and documents, in order to weight which documents could be interesting for the user; this preprocessed user profile already gives scores for the preferred documents given a query. Dou et al. (2007) complement this similarity measure with user-topic and document-topic similarity values.

The SenSee TV framework by Aroyo et al. (2007) uses ontological properties to boost results that fulfill specific properties defined by the user. For instance, let us suppose that a user has a preference for movies within the action genre produced by English investors. If the user issues the query "Friday" to search for programs that will be aired next Friday, programs related to the action genre and the England location would be shown first to the user.

Meta-search engine combination methods can be personalized by different criteria. In (Kerschberg et al.
2001) the users can express their preference for a given search engine, for a set of topics, or for the desired popularity of the search results. The final relevance measure is the combination of these personal ratings applied to each of the listings of the search engines.

• Result Clustering

Query results are clustered into a set of categories, presenting first the categories most relevant to the user (Lang 1995; Middleton et al. 2003; Yuen et al. 2004). The algorithm 1) takes the result set for a query, 2) obtains the set of categories related to the documents in the result set, 3) reorders the set of categories according to the user profile, and 4) presents the top n documents for each category. Usually, presenting the top three categories on each page, with four or five documents per category, gives good performance. The GUI has to allow the user to select a concrete category to see all the documents of the result set related to this category.

• Navigation Support

Navigation support affects how the user browses or navigates through the system's content. This can be done either by suggesting links to follow next (Asnicar and Tasso 1997; Lieberman 1995) or by adapting the layout of information to the user (Sakagami and Kamba 1997). Lieberman (1995) assists the user's Web browsing session by using the personalization score on the links of the currently open document; those links with higher scores are suggested to the user. Asnicar and Tasso (1997) classify each link in the document as interesting or not to the user, creating a final list of links reordered by relevance. The links of the linked documents are also taken into consideration, resulting in an iterative algorithm resembling a local personalized Web crawler. The Anatagonomy system (Sakagami and Kamba 1997) introduces a way of personalizing a news portal. The personalization score is computed for recent news and, depending on this score, a personalized layout of the front page of news is presented to the user.
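The four-step result clustering algorithm above can be sketched as follows; the category assignments and profile weights are toy assumptions:

```python
from collections import defaultdict

def cluster_results(results, doc_categories, profile_weights, n=2):
    """1) take the result set, 2) group documents by category,
    3) order categories by user profile weight, 4) keep the top n docs each."""
    by_category = defaultdict(list)
    for doc in results:                      # results are in engine rank order
        for cat in doc_categories.get(doc, []):
            by_category[cat].append(doc)
    ordered = sorted(by_category, key=lambda c: profile_weights.get(c, 0.0),
                     reverse=True)
    return [(cat, by_category[cat][:n]) for cat in ordered]

cats = {"d1": ["Sports"], "d2": ["Tech"], "d3": ["Tech"]}
cluster_results(["d1", "d2", "d3"], cats, {"Tech": 0.8, "Sports": 0.2})
# Tech category (d2, d3) is presented before Sports (d1)
```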
2.1.4 Personalization in Working Applications

The number of search engines with personalization capabilities has grown enormously in the past years, from social search engines, where users can collaboratively suggest which are the best results for a given query, to vertical search engines, where users can customize a domain-specific search engine. There is a growing interest from commercial search engine companies such as Yahoo, Microsoft or Google, the latter having been the first to show true personalization capabilities. The following is a list of those that have more properties in common with our proposed approach.

• Google Personal

Google's personalized search (currently discontinued) was based on topical Web categories (from the Open Directory Project), manually selected by the user. The personalization only affected the search results related to a category selected by the user. The user could change the degree of personalization by interacting with a slider, which dynamically reordered the first ten results.

• Google Co-op

Google Co-op allows the creation of shared and personalized search engines, in the sense that users are able to tag web pages and filter results with this new metadata. Tags are not meant to be a full description of the content of the annotated Web pages. It is more oriented to what could be called "functionality tags" (e.g. tagging a page as a review for the custom search engine of digital cameras). Domains and keywords can also be added to modify the search ranking and expand the user's query.
• iGoogle

Recently, Google changed the name of its personalized homepage to iGoogle (http://www.igoogle.com), stressing its personalization capabilities. Although we cannot be really sure which concrete techniques are applied specifically in the Google search engine, and these technologies are still incipient, two US patents on personalized search have been filed by Google in recent years (Badros and Lawrence 2005; Zamir et al. 2005). These patents describe techniques for personalized search results and rankings, using search history, bookmarks, ratings, annotations, and interactions with returned documents as a source of evidence of user interests. The most recent patent specifically mentions "user search query history, documents returned in the search results, documents visited in the search results, anchor text of the documents, topics of the documents, outbound links of the documents, click through rate, format of documents, time spent looking at document, time spent scrolling a document, whether a document is printed/bookmarked/saved, repeat visits, browsing pattern, groups of individuals with similar profile, and user submitted information". The Google patents consider explicit user profiles, including a list of weighted terms, a list of weighted categories, and a list of weighted URLs, obtained through the analysis of the aforementioned information. Techniques for sharing interests among users, and community building based on common interests, are also described. As an optional part of user profiles, the patent mentions "demographic and geographic information associated with the user, such as the user's age or age range, educational level or range, income level or range, language preferences, marital status, geographic location (e.g., the city, state and country in which the user resides, and possibly also including additional information such as street address, zip code, and telephone area code), cultural background or preferences, or any subset of these".
• Eurekster

Although mostly oriented to "search groups", this search engine (http://www.eurekster.com) includes the ability to explicitly build a user profile by means of terms, documents and domains. It is a metasearch engine based on the Yahoo! search engine, so only query expansion and domain-focused searches can be performed. Users can also mark which search results they consider most relevant for a given query, so that similar queries can make use of this information.

• Entopia Knowledge Bus

Entopia is a Knowledge Management company which sold a search engine named k-bus, which received many awards and was selected as the best search engine technology in 2003 by the Software & Information Industry Association. This search engine is promoted as providing highly personalized information retrieval. In order to rank the answers to a query, the engine takes into account the expertise level of the authors of the contents returned by the search, and the expertise level of the user who sent the query. Those expertise levels are computed by taking into account previous interactions of different kinds between the author and the user on some contents.

• Verity K2

The latest version of the K2 Enterprise Solution of Verity, one of the leading companies in the search engine market for businesses, includes many personalization features to sort and rank answers to a query. To build user profiles, K2 tracks all the viewing, searching, and browsing activities of users within the system. Profiles can be bootstrapped from different sources of information including authored documents, public e-mail forums in the organization, CRM systems, and Web server logs. A user can provide feedback not only on documents but also on a recommendation coming from a specific user, thus reinforcing the value of a document and also the relationship between both users.

• MyYahoo

The personalization features of Yahoo's personal search engine (http://my.yahoo.com) are still rather simple.
Users are able to "ban" URLs from appearing in search results, or to save pages to a "personal Web" that gives a higher priority to these pages once they appear in a search result set.

2.2 Context Modeling for Information Retrieval

One of the key drivers and developments towards creating personalized solutions that support proactive and context-sensitive systems has been the results from research work in personalization systems. The main indication derived from these results showed that it was very difficult to create generic personalization solutions without, in general, having a large amount of knowledge about the particular problem being solved. In order to address some of the limitations of classic personalization systems, researchers have looked to the new emerging area defined by the so-called context-aware applications and systems (Abowd et al. 1997; Brown et al. 1997).

The notion of context-awareness has long been acknowledged as being of key importance in a wide variety of fields, such as mobile and pervasive computing (Heer et al. 2003), computational linguistics (Finkelstein et al. 2001), automatic image analysis (Dasiopoulou et al. 2005), or information retrieval (Bharat 2000; Haveliwala 2002; Kim and Chan 2003), to name a few. The definitions of context are varied, from the surrounding objects within an image to the physical location of the system's user. The definition and treatment of context varies significantly depending on the application of study (Edmonds 1999).

Context in information retrieval also has a wide meaning, ranging from surrounding elements in an XML retrieval application (Arvola et al. 2005), recently selected items or purchases in proactive information systems (Billsus et al. 2005), broadcast news text for query-less systems (Henzinger et al. 2003), recently accessed documents (Bauer and Leake 2001), visited Web pages (Sugiyama et al. 2004), past queries and clickthrough data (Bharat 2000; Dou et al. 2007; Shen et al. 2005b), text surrounding a query (Finkelstein et al. 2001; Kraft et al. 2006), text highlighted by a user (Finkelstein et al. 2001), etc. Context-aware systems can be classified by 1) the concept the system has of context, 2) how the context is acquired, 3) how the context information is represented and 4) how the context representation is used to adapt the system.

One of the most important parts of any context-aware system is context acquisition. Note that this is conceptually different from profile learning techniques. On the one hand, context acquisition aims to discover the short-term interests (or local interests) of the user (Dou et al. 2007; Shen et al. 2005b; Sugiyama et al. 2004), where the short-term profile information is usually discarded once the user's session has ended. On the other hand, user profile learning techniques have a much greater impact on the overall performance of the retrieval system, as the mined preferences are intended to be part of the user profile across multiple sessions.

One simple solution for context acquisition is the application of explicit feedback techniques, like relevance feedback (Rocchio and Salton 1971; Salton and Buckley 1990). Relevance feedback builds up a context representation through an explicit interaction with the user. In a relevance feedback session:

1) The user issues a query.
2) The IR system launches the query and shows the result set of documents.
3) The user selects the results that he or she considers relevant from the top n documents of the result set.
4) The IR system obtains information from the relevant documents, operates on the query and returns to 2).
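The loop above can be sketched with the classic Rocchio update (Rocchio and Salton 1971). The following is a minimal illustrative sketch, not the exact formulation of any of the cited systems: vectors are sparse {term: weight} dictionaries, and the alpha, beta and gamma parameter values are conventional defaults chosen for illustration.

```python
from collections import defaultdict

def rocchio_update(query_vec, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector towards the
    centroid of the relevant documents and away from the centroid of
    the non-relevant ones. All vectors are sparse {term: weight} dicts."""
    updated = defaultdict(float)
    for term, weight in query_vec.items():
        updated[term] += alpha * weight
    for doc in relevant_docs:
        for term, weight in doc.items():
            updated[term] += beta * weight / len(relevant_docs)
    for doc in nonrelevant_docs:
        for term, weight in doc.items():
            updated[term] -= gamma * weight / len(nonrelevant_docs)
    # Terms driven to a non-positive weight are usually dropped.
    return {t: w for t, w in updated.items() if w > 0}
```

The updated query is then re-issued in step 4, and the loop continues until the user is satisfied.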
Relevance feedback has been proven to improve retrieval performance. However, the effectiveness of relevance feedback is considered to be limited in real systems, basically because users are often reluctant to provide such information (Shen et al. 2005b); this information is needed by the system in every search session, asking for a greater effort from the user than explicit feedback techniques in personalization. For this reason, implicit feedback is widely chosen among context-aware retrieval systems (Campbell and van Rijsbergen 1996; Kelly and Teevan 2003; Sugiyama et al. 2004; White et al. 2006; White and Kelly 2006). A complete classification of contextual approaches related to IR systems can be found in Table 2.9.

REFERENCE | CONCEPT | ACQUISITION | REPRESENTATION | EXPLOITATION
(Akrivas et al. 2002) | Surrounding text | Term thesaurus | Concepts | Query operations
(Bauer and Leake 2001) | Desktop | Term extraction | Term vector | Context revisit
(Bharat 2000) | Clickthrough | Clickthrough | Usage history | Context revisit
(Billsus and Pazzani 2000) | Clickthrough | Term extraction | Term vector | Document weighting
(Budzik and Hammond 1999) | Desktop | Term extraction | Term vector | Proactive queries
(Budzik and Hammond 2000) | Desktop | Text mining | Term vector | Query operation
(Dou et al. 2007) | Clickthrough | Classification | Topic vector | Query operation
(Dumais et al. 2003) | Desktop | Term frequency | Inverted index | Context revisit
(Finkelstein et al. 2001) | Surrounding context | Term extraction | Term vector | Query operation
(Haveliwala 2002) | Surrounding context | Classification | Topic vector | Document weighting
(Kraft et al. 2006) | Surrounding context | Entity extraction | Term vector | Query operation
(Leroy et al. 2003) | Clickthrough | Term extraction | Term vector | Query operation
(Melucci 2005) | Location | None | Vector base | Query operation
(Rhodes and Maes 2000) | Desktop | Term extraction | Terms | Document weighting
(Shen et al. 2005a) | Clickthrough | Language model | Terms, probability | Query operation
(Shen et al. 2005b) | Clickthrough | Term extraction, query similarity | Term vector | Document weighting
(Sugiyama et al. 2004) | Clickthrough | Term extraction | Term vector | Document weighting
(Vishnu 2005) | Desktop | Classification | Topic vector | Document weighting
(White and Kelly 2006) | Clickthrough | Ostensive model | Term vector | Query operation

Table 2.9. Overview of context-aware retrieval systems

2.2.1 Concept of Context

Even if we narrow our study to information retrieval agents, there is still a variety of interpretations of the concept of context. This interpretation includes what the sources of context are, how we can extract this contextual information and, more importantly, how we can use this information to the benefit of the user. The interpretation of context can range from something as simple as the query that the user has just input into the retrieval system (Akrivas et al. 2002) to all the currently opened desktop applications and current desktop interactions (Dumais et al. 2003), including ubiquitous properties such as location or time (Melucci 2005).

Clickthrough information

Clickthrough data is one of the most used sources for context acquisition. Differently from personalized systems, where historical clickthrough data is mined in order to construct a long-term user profile, in the case of contextual retrieval systems the clickthrough data is normally bound to a single session of interaction with the retrieval system. One exception is the AIS system (Billsus and Pazzani 2000), which defines the contextual information as the last 100 interactions with the system. Clickthrough data, in the case of contextual retrieval systems, is mined as a way of obtaining the short-term interests of the user, i.e. what task the user is trying to achieve at the moment. Knowing this information can enable the system to adapt the retrieval mechanism in a way that the current interests of the user are satisfied to a greater extent.
Similarly to personalization systems (see implicit feedback in section 2.1.2), when interpreting clickthrough data the system can choose to take into consideration both the query and the posterior interactions with the documents (Bharat 2000; Dou et al. 2007; Shen et al. 2005a; Shen et al. 2005b), or limit the source of context to just the opened documents (Bauer and Leake 2001; Billsus and Pazzani 2000; Sugiyama et al. 2004; White and Kelly 2006).

Desktop information

Another concept of context is the information from the user's desktop: for instance, opened windows and documents, sent emails, etc. One of the main restrictions of desktop-based contexts is that, differently from clickthrough data, which can be harvested from any Web search application, desktop actions or information have to be obtained from a local application. On the other hand, such a system can obtain this information without the need for previous interaction with the system, and can effectively solve the "cold start" problem: in clickthrough-based systems, when starting a session, the system has not yet obtained any contextual clue of the user's intentions, whereas in desktop-based systems the user could have previously interacted with other applications.

Budzik and Hammond (1999, 2000) extract the contextual information from open applications, such as document editors or web browsers. Vishnu (2005) extends this context to messaging applications. The Stuff I've Seen system (Dumais et al. 2003) not only takes into consideration opened Web documents and word processing applications, but also uses information from emails, the web and previously created documents. The Just-in-time system (Rhodes and Maes 2000) also takes into account word processing and email applications, but only considers the last 500 words edited by the user. The WordSieve system (Bauer and Leake 2001) monitors short-term sequences of access to documents, building up a task representation of the user's context.

Surrounding query context

Another interpretation of the user's context is the information that surrounded the input of the query.
This information can give clues on what produced the user's information need, interpreting what the user was viewing at the moment of generating the query. This could seem similar to the desktop-based systems, which try to gather what the user was viewing at the moment of executing the query, but in this case the information is finer grained, as it is normally the user him or herself who indicates which surrounding information motivated the query. For instance, the IntelliZap system (Finkelstein et al. 2001) allows the user to send a query over a highlighted term, taking the surrounding text as the context of the query. This of course compels the user to make an extra effort in order to provide this information, given that this contextual information can indeed be found in the document that the user was viewing at the moment. Kraft et al. (2006) try to combine the benefits of both the desktop-based and surrounding query approaches, by also taking into consideration the information of open documents. Haveliwala (2002) also gives importance to the surrounding text of the query, along with possible past queries. Akrivas et al. (2002) claim that the context of a query can be represented with the very same query terms. The authors' hypothesis is that each term of a query can be disambiguated by analyzing the rest of the query terms: for instance, disambiguating the term 'element' with the chemistry meaning if it appears among other chemistry-related terms, or assigning it an XML meaning if it appears with terms related to the XML field. It is unclear that this technique will always be successful, as a large percentage of users' queries contain no more than one or two terms (Jansen et al. 1998).
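The co-occurrence intuition behind this query-term disambiguation can be illustrated with a toy sketch. The sense lexicon below is purely hypothetical; a real system such as the one of Akrivas et al. (2002) would draw these associations from a term thesaurus rather than a hand-written dictionary.

```python
# Toy illustration: choose, for an ambiguous query term, the sense
# whose related vocabulary overlaps most with the other query terms.
# The sense lexicon below is hypothetical, for illustration only.
SENSES = {
    "element": {
        "chemistry": {"atom", "periodic", "compound", "reaction"},
        "xml": {"tag", "attribute", "schema", "namespace"},
    },
}

def disambiguate(term, other_query_terms):
    """Return the best-matching sense label for `term`, or None if the
    term is not in the lexicon."""
    best_sense, best_overlap = None, -1
    for sense, related in SENSES.get(term, {}).items():
        overlap = len(related & set(other_query_terms))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense
```

As the text notes, the approach degrades for the very short (one- or two-term) queries that dominate real query logs, since there is little co-occurring evidence to exploit.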
2.2.2 Context Acquisition and Representation

Context acquisition and representation approaches are similar to implicit learning approaches in personalization systems (see section 2.1.2). These approaches are in general simpler than personalization profiling approaches. The main cause of this is that context acquisition techniques are normally focused within a single session. User profiling approaches have to cope with mining large amounts of information, having to be able to discern which concepts appear because they are a long-term interest of the user, which appear because of a periodical interest (e.g. summer holidays) and which appear because of some random factor (e.g. searching for a present for my cousin's birthday). For instance, Anand and Mobasher (2007) illustrated this problem with an example where a user searches on Amazon.com for a pregnancy book for a girlfriend, just to have the system learn that as an "interest" and receive future recommendations related to the pregnancy topic. On the other hand, a contextual system would have correctly acquired the pregnancy topic, and, even if the system fails to apprehend the user context, errors in contextual systems do not have such an impact, as the contextual profiles are usually discarded at the end of the session.

The most common approach for context acquisition and representation follows these steps: 1) depending on the concept of context of the system, obtain the set of documents and/or queries which represent the current context, 2) extract the most important or representative terms from these items, optionally weighted by some statistical metric (e.g. tf.idf) and 3) add these terms to the contextual profile. The weighting metric can vary from not weighting the terms at all (Rhodes and Maes 2000) to a simple term frequency count (Leroy et al. 2003; Sugiyama et al. 2004) or a combination of the term frequency with the inverse document frequency of the term (i.e. tf.idf) (Bauer and Leake 2001; Billsus and Pazzani 2000; Dumais et al. 2003; Shen et al. 2005b). Other systems apply the same technique with different heuristics. Budzik and Hammond (1999, 2000) give more importance to terms that are at the beginning of a document or that are emphasized. Finkelstein et al. (2001) apply a clustering algorithm to find the most important terms. White and Kelly (2006) use the wpq approach for query expansion term weighting (Robertson 1990). This weighting function is based on "the probabilistic distribution of terms in relevant and non-relevant documents". When the user submits a query or visits a document, it is considered as a new interaction of the session, and the terms appearing in the query or the visited document are considered as the observable items for the interaction. The context is thus dynamic and changes with every interaction of the user, giving more weight to those terms that have been observed more frequently in past interactions, and in interactions closer to the current one. They exploited the viewing time as an implicit relevance indicator, using a time threshold value to determine whether a document was relevant or not. Interestingly, threshold values set according to the task that the user had to perform proved to perform better than threshold values set according to each user, which somehow proves that the task information (i.e. the context) can be even more valuable than the overall information about the user. Finally, Shen et al. (2005a) use a language model approach to weight the extracted terms.

Other approaches for context acquisition include classification techniques (Dou et al. 2007; Haveliwala 2002; Vishnu 2005), where the extracted content is classified into a set of topics, storing these weighted topics as the contextual profile.
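The three-step extract-and-weight procedure, combined with the recency-sensitive weighting used in ostensive-style models, might be sketched as follows. This is an illustrative simplification: whitespace tokenization, the exponential decay factor and the idf table are assumptions, not the formulation of any of the cited systems.

```python
from collections import Counter, defaultdict

def contextual_profile(session_docs, idf, decay=0.8):
    """Build a short-term context vector from the texts seen in the
    current session: terms are weighted by tf.idf, and interactions
    further in the past are exponentially decayed so that recent
    interactions dominate (an ostensive-style recency heuristic).
    `session_docs` is ordered oldest -> newest; `idf` maps terms to
    inverse document frequency values (default 1.0 if unknown)."""
    profile = defaultdict(float)
    n = len(session_docs)
    for i, text in enumerate(session_docs):
        recency = decay ** (n - 1 - i)  # newest document gets 1.0
        for term, tf in Counter(text.lower().split()).items():
            profile[term] += recency * tf * idf.get(term, 1.0)
    return dict(profile)
```

Because the resulting profile lives only for the session, a poor weighting choice is less costly here than in long-term profile learning, as discussed above.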
Some approaches use semantic relations: Akrivas et al. (2002) use a term thesaurus to expand the semantics of the terms extracted from the query, and represent the context as the concepts in this thesaurus. Kraft et al. (2006) apply entity recognition techniques and store these entities as the user context. Finally, some approaches do not apply any type of acquisition approach, such as Bharat (2000), who only stores the clickthrough data, and Melucci (2005), whose example of a ubiquitous system uses sensor data such as time or location.

2.2.3 Context Exploitation

Context exploitation shares similarities with exploiting a user profile (see section 2.1.3). Similarly to personalization approaches, some contextual approaches exploit the contextual profile to perform several query operations, in which the user query is modified (Kraft et al. 2006), expanded (Akrivas et al. 2002), or created as a context-related proactive query (Budzik and Hammond 1999), i.e. without interaction from the user. There are also document weighting techniques in which the results are reordered based on a document-context similarity (Sugiyama et al. 2004). However, there are also techniques that are only seen in contextual systems, such as saving past context in order to let the user revisit this information in the future (Bharat 2000).

Query operations

Query operations are the most common techniques in contextual systems. Normally the contextual information is exploited as in a relevance feedback model, and applied to the current user's query (Akrivas et al. 2002; Budzik and Hammond 2000; Finkelstein et al. 2001; Sugiyama et al. 2004; White and Kelly 2006). Finkelstein et al. (2001) combine this classic query expansion technique with a classification approach: if the expanded query can be classified into a topic, then a specific search engine can be used for retrieval. Leroy et al. (2003) compare this classic approach with a genetic-based exploitation approach, and also compare the impact of using negative feedback as a source of a "negative" context representation. Their conclusions were that "the genetic algorithm with negative expansion should be activated for low achievers but not for high achievers; relevance feedback should be activated for high achievers". Here the term 'high achievers' means users that perform better on typical retrieval tasks. Kraft et al. (2006) also complement the classic query expansion approach with an Iterative Filtering Meta-search (IFM), which basically is the generation of multiple queries from the context vector and a final fusion of each of the search result lists returned by topic-specific search engines.

Different models of query expansion include, for instance, the system presented by Melucci (2005), based on classic vector space models (Salton and McGill 1986), in which the query operation technique is based on adjusting the vector space basis according to the context information; in this case the launched query will be modified by this change of the vector basis, thus being ultimately modified by the context information. Shen et al. (2005a) use the context representation in order to change the current language model representation of the user's query, by modifying the document prior with the context. This updates the historical model, which represents the historical contextual information (represented similarly to Tan et al. (2006)), with the new short-term contextual information.
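The most common exploitation, expanding the current query with the top-weighted terms of the contextual profile, can be sketched minimally as follows; the cut-off parameter k is an illustrative choice, not a value prescribed by any of the cited systems.

```python
def expand_query(query_terms, context_profile, k=3):
    """Append to the query the k highest-weighted context terms that
    are not already present (classic context-driven query expansion).
    `context_profile` is a {term: weight} dict."""
    candidates = sorted(
        ((w, t) for t, w in context_profile.items() if t not in query_terms),
        reverse=True)
    return list(query_terms) + [t for _, t in candidates[:k]]
```

IFM-style approaches would instead generate several such expanded queries and fuse the result lists returned for each of them.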
Context revisit

Some systems use the context as a way of letting the user "recover" past contextual information which could be useful to the current task at hand. Bharat (2000) stores the clickthrough information from past interactions of the user with the retrieval system. The goal is to let the user inspect all the visited documents from a past query, with useful information such as the time spent viewing the document, thus allowing the user to revisit what could have been found relevant in past contexts similar to the current one. Bauer and Leake (2001) apply the current context in an information agent, capable of returning resources that were used in past similar contexts, or of proactively retrieving Web results. The Stuff I've Seen system (Dumais et al. 2003) allows the user to easily revisit web pages, sent emails, or created documents by exploiting an inverted index.

Document weighting

In document weighting approaches, a contextualized score is computed for a given document. Normally this contextualized score is given by a context-document similarity value. Again, classic vector space models are also used to compute this similarity value, by calculating the cosine between the term vector representation of the current context and the document (Rhodes and Maes 2000; Shen et al. 2005b). Shen et al. (2005b) only apply this similarity value if the current query is similar enough to the current context. The novelty of this work is that the contextualization effects are not only applied to query search results; the contextualization effect is rather more interactive, as the results are reordered, or more results are added through query expansion, whenever the user returns to the result list after clicking through a result item.

In context classification approaches this similarity value is given by the vector similarity between the topic vector representation of the context and the topic vector representation of the document (Dou et al. 2007; Vishnu 2005). Haveliwala (2002) also uses a topic-based representation, although in this case the document weightings come from selecting a specific topic-biased score.
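The document weighting idea, biasing a result ranking by a context-document cosine similarity, might look as follows. The linear combination and the lambda parameter are illustrative assumptions for this sketch, not the formulation of any of the cited systems, which differ in how (and when) the contextual score is mixed in.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank(results, context_vec, lam=0.5):
    """Re-rank a result list by mixing each document's original engine
    score with its similarity to the context vector; `lam` controls
    the strength of the contextual bias. `results` is a list of
    (doc_id, engine_score, term_vector) tuples."""
    scored = [(lam * cosine(vec, context_vec) + (1 - lam) * score, doc_id)
              for doc_id, score, vec in results]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]
```

Setting lam to 0 recovers the original ranking, which mirrors the conditional application in Shen et al. (2005b): contextualization is only turned on when the context is judged reliable for the current query.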
Chapter 3

3 A Personalized Information Retrieval Model Based on Semantic Knowledge

Personalized retrieval widens the notion of information need to comprise implicit user needs, not directly conveyed by the user in terms of explicit information requests (Micarelli and Sciarrone 2004). Again, this involves modeling and capturing such user interests, and relating them to content semantics in order to predict the relevance of content objects, considering not only a specific user request but the overall needs of the user. When it comes to the representation of semantics (to describe content, user interests, or user requests), ontologies provide a highly expressive ground for describing units of meaning and a rich variety of interrelations among them. Ontologies achieve a reduction of ambiguity, and bring powerful inference schemes for reasoning and querying. Not surprisingly, there is a growing body of literature in the last few years that studies the use of ontologies to improve the effectiveness of information retrieval (Guha et al. 2003; Kiryakov et al. 2004; Stojanovic et al. 2003; Vallet et al. 2005) and personalized search (Gauch et al. 2003; Speretta and Gauch 2005). However, past work that claims the use of ontologies for the user profile representation (Gauch et al. 2003; Kerschberg et al. 2001; Speretta and Gauch 2005) does not exploit the variety of interrelations between concepts, but only the taxonomic relations, losing the inference capabilities, which will prove critical for the approach proposed here.

Our proposed personalization framework is set up in such a way that the models wholly benefit from the ontology-based grounding. In particular, the formal semantics are exploited to improve the reliability of personalization. Personalization can indeed enhance the subjective performance of retrieval, as perceived by users, and is therefore a desirable feature in many situations, but it can easily be perceived as erratic and obtrusive if not handled adequately.
Two key aspects to avoid such pitfalls are a) to appropriately manage the inevitable risk of error derived from the uncertainty of a formal representation of users' interests, and b) to correctly identify the situations where it is, or it is not, appropriate to personalize, and to what extent (Castells et al. 2005).

As discussed in section 2.1, personalized IR systems can be divided into three main components: 1) the user profile representation, which represents the long-term preferences and interests of the user, 2) the user profile acquisition, which obtains and infers the user profile, and 3) the user profile exploitation, which adapts the retrieval system to the user profile. Broadly speaking, information retrieval deals with modeling information needs, content semantics, and the relation between them (Salton and McGill 1986). The personalization system presented here builds and exploits an explicit awareness of (meta)information about the user. The acquisition of this information is out of the scope of this work: the user profile could be either directly provided by the user or implicitly evidenced along the history of his/her actions and feedback (see section 2.1.2).

3.1 Ontology-based User Profile Representation

The personalization system makes use of conceptual user profiles (as opposed to e.g. sets of preferred documents or keywords), where user preferences are represented as a vector of weights (numbers from -1 to 1), corresponding to the intensity of user interest for each concept in the ontology, negative values being indicative of a dislike for that concept. Comparing the metadata of items and the preferred concepts in a user profile, the system predicts how the user may like an item, measured as a value in [-1,1]. Although, as stated, negative values (i.e. allowing the representation of dislikes) are supported by the presented system, these have to be treated cautiously, especially when these values have been implicitly generated by the profile acquisition module of the system.
As an example, negative preferences could be used as a means of parental control, for instance indicating a negative preference for violent content, and making the system filter out documents that match negative preferences. Many personalized retrieval systems benefit from the implicit or explicit feedback of the user to readjust the user profile (Kelly and Teevan 2003; Rocchio and Salton 1971); a negative weight for a concept could cause the system to low-rank every content item that contains that concept, disabling the possibility of obtaining real feedback from the user for that concept, as users tend to investigate only a few content items within a result set.

We find many advantages to this representation, in opposition to the common keyword-based approaches:

• Richness. Concept preferences are more precise and carry more semantics than simple keyword terms. For instance, if a user states an interest in the keyword 'Jaguar', the system does not have further information to distinguish Jaguar the wild animal from Jaguar the car brand. A preference stated as 'WildAnimal:Jaguar' (read as "the instance Jaguar of the wild animal class") lets the system know unambiguously the preference of the user, and also allows the use of more appropriate related semantics (e.g. synonyms, hypernyms, subsumption, etc.). This, together with disambiguation techniques, leads to the effective personalization of text-based content.

• Hierarchical representation. Concepts within an ontology are represented in a hierarchical way, through different hierarchy properties (e.g. subClassOf, instanceOf, partOf, etc.). Parents, ancestors, children and descendants of a concept give valuable information about the concept's semantics. For instance, the concept animal is highly enriched by the semantics of each animal class and a hypothetical taxonomy that the concept could subsume.
• Inference. Ontology standards, such as RDF (http://www.w3.org/RDF/) and OWL (http://www.w3.org/TR/owl-features/), support inference mechanisms that can be used in the system to further enhance personalization, so that, for instance, a user interested in animals (superclass of cat) is also recommended items about cats. Inversely, a user interested in lizards, snakes, and chameleons can be inferred to be, overall, interested in reptiles with a certain confidence. Also, a user keen on Sicily can be assumed to like Palermo, through the transitive 'locatedIn' relation.

The ontology-based representation of user interests is richer, more precise, and less ambiguous than a keyword-based or item-based model. It provides an adequate grounding for the representation of coarse to fine-grained user interests (e.g. interest in broad topics, such as football, sci-fi movies, or the NASDAQ stock market, vs. preference for individual items such as a sports team, an actor, a stock value), and can be a key enabler to deal with the subtleties of user preferences, such as their dynamic, context-dependent relevance. An ontology provides further formal, computer-processable meaning on the concepts (who is coaching a soccer team, an actor's filmography, financial data on a stock), and makes it available for the personalization system to take advantage of.

Figure 3.1 presents an example of conceptualized user preferences. Let us suppose a user indicates an interest in the topic 'Leisure'. The system is then able to infer preferences for 'Leisure' sub-topics, obtaining finer-grained details about the user preference. Note that original and more specific preferences will prevail over the system's inferences; in this case the user is not keen on modern music, which prevails over the higher topic inference. Not only hierarchy properties can be exploited for preference inference: supposing that the user has a preference for the USA, the properties 'visit' and 'locatedIn' could thus be used by the system to guess a preference for the Hawaii islands; in this case a Hawaii tourist guide would have a positive value of preference for the user (more details on preference expansion can be found in section 4.6).

Figure 3.1. User preferences as concepts in an ontology.

3.2 A Semantic Approach for User Profile Exploitation

Exploiting user profiles involves using the information contained in profiles to adapt the Information Retrieval system. The goals addressed so far have been focused on delivering preference-based improvements for content filtering and retrieval, in a way that can be easily introduced to support the retrieval functionalities, such as searching, browsing, and recommending. Automatic personalization is not appropriate in all situations. Therefore, it is considered an optional feature that users can turn on and off at any time.

The personalization system assumes that the items in a retrieval space D are annotated with weighted semantic metadata, which describe the meaning carried by each item in terms of a domain ontology O. That is, each item d ∈ D is associated with a vector M(d) ∈ [0,1]^|O| of domain concept weights, where for each x ∈ O, the weight indicates the degree to which the concept x is important in the meaning of d. Thus, as shown in Figure 3.2, there is a fuzzy relationship between users and the indexed content of the system, through the ontology layer. Although the use of this ontology layer is transparent to the user, the system can take advantage of an ontological representation of user preferences: unambiguous, with richer relations and inference capabilities (see 3.1). Based on preference weights, measures of user interest for content units can be computed, with which it is possible to discriminate, prioritize, filter and rank contents (a collection, a catalog section, a search result) in a personal way.

Figure 3.2. Links between user preferences and search space.

A fundamental notion for this purpose is the definition of a measure of content relevance for the interests of a particular user. Building on the definition of this measure, specific document weighting algorithms can then be developed for filtering and sorting a list of content items, and re-ranking search results, according to user preferences.

The basis for the personalization of content retrieval is the definition of a matching algorithm that provides a personal relevance measure (PRM) of a document for a user, according to her/his semantic preferences. The measure is calculated as a function of the semantic preferences of the user and the semantic metadata of the document. In this calculation, user preferences and content metadata are seen as two vectors in an N-dimensional vector space, where N is the number of elements in the universe O of ontology concepts, and the coordinates of the vectors are the weights assigned to ontology concepts in user preferences and document annotations, representing, respectively, the intensity of preference and the degree of importance for the document. Semantic preferences also include inferred preferences, for example by deductive inference, so if a user expresses a preference for the animal concept, preferences for each subclass of animal (e.g. the 'Bird' concept) would be inferred (for more information see section 4.6).
The procedure for matching these vectors has been primarily based on a cosine function for vector similarity computation, as follows:

PRM = cos(SP⃗, M⃗) = (SP⃗ · M⃗) / (|SP⃗| × |M⃗|) = Σ_{i=1..N} (SP_i × M_i) / ( √(Σ_{i=1..N} SP_i²) × √(Σ_{i=1..N} M_i²) )

Equation 1. Personal Relevance Measure, SP = Semantic Preferences, M = Content Metadata.

where SP⃗ stands for the semantic preferences P(u) of the user u, and M⃗ is the metadata M(d) related to the document d.

Figure 3.3 is the visual representation of similarity between vectors, considering only a three-dimensional space. As in the classic Information Retrieval vector model (Ricardo and Berthier 1999), the information expressed in two vectors is more alike the closer the vectors are in the finite-dimensional space. In classic IR, one vector represents the query and the other matching vectors are the representation of the documents. In our representation, the first vector is the user preference, whereas the second vectors are essentially the representation of the content in the system's search space.

Figure 3.3. Visual representation of metadata and preference vector similarity, in a space {x_1, x_2, x_3} = domain ontology O.

The PRM algorithm thus matches two concept-weighted vectors and produces a value between [-1, 1]. Values near -1 indicate that the preferences of the user do not match the content metadata (i.e. the two vectors are dissimilar); values near 1 indicate that the user interests do match the content. Note that the system cannot always have weighted annotations attached to the documents, or analysis tools that produce weighted metadata; in that case, the PRM function assigns a weight of 1 by default to all metadata. Even so, it is interesting to keep the ability to support weighted annotations, for reusability in systems that do provide these values (see e.g. (Vallet et al. 2005)). For instance, Figure 3.4 shows a setting where O = {Flower, Dog, Sea, Surf, Beach, Industry} is the set of all domain ontology terms (classes and instances).
According to her profile, the user is interested in the concepts of 'Flower', 'Surf', and 'Dog', with different intensity, and has a negative preference for 'Industry'. The preference vector for this user is thus SP⃗ = (0.7, 1.0, 0.0, 0.8, 0.0, -0.7). A still image is annotated with the concepts of 'Dog', 'Sea', 'Surf' and 'Beach'; therefore the corresponding metadata vector is M⃗ = (0.0, 0.8, 0.6, 0.8, 0.2, 0.0).

Semantic interests over O = {Flower, Dog, Sea, Surf, Beach, Industry}: SP⃗ = (0.7, 1.0, 0.0, 0.8, 0.0, -0.7)
Content metadata over O = {Flower, Dog, Sea, Surf, Beach, Industry}: M⃗ = (0.0, 0.8, 0.6, 0.8, 0.2, 0.0)

Figure 3.4. Construction of two concept-weighted vectors.

The PRM of the still image for this user shall therefore be:

PRM = [(0.7×0.0) + (1.0×0.8) + (0.0×0.6) + (0.8×0.8) + (0.0×0.2) + (-0.7×0.0)] / [ √(0.7² + 1.0² + 0.0² + 0.8² + 0.0² + (-0.7)²) × √(0.0² + 0.8² + 0.6² + 0.8² + 0.2² + 0.0²) ] = 0.69

This measure can be combined with the relevance measures computed by the user-neutral algorithms, producing a personalized bias on the ranking of search results, as explained in the following section.

3.2.1 Personalized Information Retrieval

Search personalization is mainly achieved in our system by a document weighting approach (see document weighting in section 2.1.3). This approach may consist of cutting down (i.e. filtering) search results, reordering the results, or providing some sort of navigation support. The PRM measure described in the preceding section acts as the personalization score, i.e. the user-document similarity value.
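The PRM computation above can be sketched in a few lines of Python. This is a minimal illustration of Equation 1 applied to the Figure 3.4 vectors; the function name and vector representation as plain lists are illustrative choices, not part of the thesis implementation.

```python
import math

def prm(sp, m):
    """Personal Relevance Measure: cosine between a preference vector and
    a metadata vector, both indexed by the same ordering of O (Equation 1)."""
    dot = sum(s * x for s, x in zip(sp, m))
    norm_sp = math.sqrt(sum(s * s for s in sp))
    norm_m = math.sqrt(sum(x * x for x in m))
    return dot / (norm_sp * norm_m)

# O = {Flower, Dog, Sea, Surf, Beach, Industry}, as in Figure 3.4
sp = [0.7, 1.0, 0.0, 0.8, 0.0, -0.7]   # semantic user preferences SP
m  = [0.0, 0.8, 0.6, 0.8, 0.2,  0.0]   # document metadata M
print(round(prm(sp, m), 2))  # 0.69
```

Note that the 'Industry' dislike (-0.7) still enlarges |SP⃗| in the denominator, so negative preferences lower the score of every document, not only of those annotated with the disliked concept.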
Personalization of search must be handled carefully. An excessive personal bias may drive the results too far from the actual query. This is why we have taken the decision to discard query reformulation techniques, and to adopt document weighting techniques, such as personalized filtering and result reordering, as a postprocess to the execution of queries. Still, the personalized ranking defined by the PRM values should be combined with the query-dependent rank (QDR) values returned by the intelligent retrieval modules. That is, the final combined rank (CR) of a document d, given a user u and her query q, is defined as a function of both values:

CR(d, q, u) = f(PRM(M⃗_d, SP⃗_u), QDR(d, q))

Equation 2. Final personalized Combined Rank.

The question remains as to how both values should be combined and balanced. As an initial solution, we use a linear combination of both:

CR(d, q, u) = λ · PRM(M⃗_d, SP⃗_u) + (1 – λ) · QDR(d, q)

Equation 3. Linear combination of PRM and QDR.

where the value of λ, between 0 and 1, determines the degree of personalization of the subsequent search ranking.

What an appropriate value for λ is, how it should be set, and whether functions other than a linear combination would perform better, are work in progress in this task, but some initial solutions have been outlined (Castells et al. 2005). Explicit user requests, queries and indications should always take precedence over system-learned user preferences. Personalization should only be used to “fill the gaps” left by the user in the information she provides, and only when the user is willing to be helped this way. Therefore, the larger the gap, the more room for personalization. In other words, the degree of personalization λ can be proportional to the size of this gap.
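Equation 3 can be sketched as follows. The value λ = 0.3 is an arbitrary illustration, not a value prescribed by the thesis:

```python
def combined_rank(prm_score, qdr_score, lam=0.3):
    """CR = lam * PRM + (1 - lam) * QDR (Equation 3).
    lam in [0,1] sets the degree of personalization; lam = 0 disables it."""
    return lam * prm_score + (1.0 - lam) * qdr_score

# The personal bias reorders documents whose query-dependent scores are close:
print(round(combined_rank(0.69, 0.50), 3))   # 0.557
print(round(combined_rank(-0.20, 0.55), 3))  # 0.325
```

With λ = 0, the ranking reduces to the user-neutral QDR ordering, which matches the requirement that explicit queries take precedence over learned preferences.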
One possible criterion to estimate this gap is to measure the specificity of the query. This can be estimated by measuring the generality of the query terms (e.g. by the depth and width of the concept tree under the terms in the ontology), the number of results, or the closeness of rank values. For instance, the topic of 'Sports' is rather high in the hierarchy, has a large number of subtopics, a large number of concepts belong to this topic, and a query for 'Sports' would probably return contents by the thousands (of course this depends on the repository). It therefore leaves quite some room for personalization, which would be a reason for raising λ in this case.

Ultimately, personalized ranking, as supported by the adapted IR system, should leave the degree of personalization as an optional parameter, so that it could be set by the user herself, as in Google personalized web search. See also (Dwork et al. 2001; Lee 1997; Manmatha et al. 2001; Renda and Straccia 2003; Vogt and Cottrell 1999) for the state of the art on combining rank sources.

Building on the combined relevance measure described above, a personalized ranking is defined, which will be used as the similarity measure for result reordering.

The personal relevance measure can also be used to filter and order lists of documents while browsing. In this case the room for personalization is higher, in general, when compared to search, since browsing requests are usually less specific than search queries. Moreover, browsing requests, viewed as light queries, typically consist of boolean filtering conditions (e.g. filter by date or category) and strict orderings (by title, author, date, etc.). If any fuzzy filters are defined (e.g. when browsing by category, contents might have fuzzy degrees of membership to a category), the personalization control issues described above would also apply here. Otherwise, personalization can take over the ranking all by itself (again, if requested by the user).
On the other hand, the PRM measure, combined with the advanced browsing techniques, provides the basis for powerful personalized visual clues. Any content highlighting technique can be put to the benefit of personalization, such as the size of visual representations (bigger means more relevant), color scale (e.g. closer to red means more interesting), position in 3D space (foreground vs. background), automatic hyperlinks (to interesting contents), etc.

Chapter 4

4 Personalization in Context

Specific, advanced mechanisms need to be developed in order to ensure that personalization is used at the right time, in the appropriate direction, and in the right amount. Users seem inclined to rely on personalized features when they need to save time, wish to spare efforts, have vague needs, have limited knowledge of what can be queried for (e.g. for lack of familiarity with a repository, or with the querying system itself), or are not aware of recent content updates. Personalization is clearly not appropriate, for instance, when the user is looking for a specific, known content item, or when the user is willing to provide detailed relevance feedback, engaging in a more conscientious interactive search session. Even when personalization is appropriate, user preferences are heterogeneous, variable, and context-dependent. Furthermore, there is inherent uncertainty in the system when automatic preference learning is used. To be accurate, personalization needs to combine long-term predictive capabilities, based on past usage history, with shorter-term prediction, based on current user activity, as well as reaction to (implicit or explicit) user feedback on personalized output, in order to correct the system's assumptions when needed.

The idea of contextual personalization, proposed and developed here, responds to the fact that human preferences are multiple, heterogeneous, changing, and even contradictory, and should be understood in context with the user goals and tasks at hand. Indeed, not all user preferences are relevant in all situations.
For instance, if a user is consistently looking for contents in the Formula 1 domain, it would not make much sense for the system to prioritize a Formula 1 picture with a helicopter in the background as more relevant than others, just because the user happens to have a general interest in aircraft. In the semantic realm of Formula 1, aircraft are out of (or at least far from) context. Taking into account further contextual information, available from prior sets of user actions, the system can provide an undisturbed, clear view of the actual user's history and preferences, cleaned from extraordinary anomalies, distractions or “noise” preferences. We refer to this surrounding information as contextual knowledge or just context; it offers significant aid in the personalization process. The effect and utility of the proposed techniques consist of endowing a personalized retrieval system with the capability to filter and focus its knowledge about user preferences on the semantic context of ongoing user activities, so as to achieve coherence with the thematic scope of user actions at runtime.

As already discussed in the background section of this work, context is a difficult notion to grasp and capture in a software system. In our approach, we focus our efforts on this major topic of retrieval systems by restricting it to the notion of semantic runtime context. The latter forms a part of general context, suitable for analysis in personalization, and can be defined as the background themes under which user activities occur within a given unit of time. From now on we shall refer to semantic runtime context as the information related to personalization tasks, and we shall use the simplified term context for it. The problems to be addressed include how to represent the context, how to determine it at runtime (acquisition), and how to use it to influence the activation of user preferences, “contextualize” them, and predict or take into account the drift of preferences over time (short- and long-term).
As will be described in section 4.3, in our current solution to these problems, a runtime context is represented as (is approximated by) a set of weighted concepts from the domain ontology. How this set is determined, updated, and interpreted will be explained in section 4.4. Our approach to the contextual activation of preferences is then based on a computation of the semantic similarity between each user preference and the set of concepts in the context, as will be shown in section 4.5.1. In spirit, the approach tries to find semantic paths linking preferences to context. The considered paths are made of existing semantic relations between concepts in the domain ontology. The shorter, stronger, and more numerous such connecting paths are, the more in context a preference shall be considered.

The proposed techniques to find these paths take advantage of a form of Constrained Spreading Activation (CSA) strategy (Crestani 1997), as will be explained in section 4.5. In the proposed approach, a semantic expansion of both user preferences and the context takes place, during which the involved concepts are assigned preference weights and contextual weights, which decay as the expansion grows farther from the initial sets. This process can also be understood as finding a sort of fuzzy semantic intersection between user preferences and the semantic runtime context, where the final computed weight of each concept represents the degree to which it belongs to each set.

Finally, the perceived effect of contextualization should be that user interests that are out of focus, under a given context, shall be disregarded, and only those that are in the semantic scope of the ongoing user activity (the “intersection” of user preferences and runtime context) will be considered for personalization. As suggested above, the inclusion or exclusion of preferences need not be binary, but may range on a continuous scale instead, where the contextual weight of a preference shall decrease monotonically with the semantic distance between the preference and the context.
4.1 Notation

Before continuing, we provide a few details on the mathematical notation that will be used in the sequel. It will be explained again in most cases when it is introduced, but we gather it all here, in a single place, for the reader's convenience.

O: The domain ontology (i.e. the concept space).
R: The set of all relations in O.
D: The set of all documents or content in the search space.
M : D → [0,1]^|O|: A mapping between documents and their semantic annotations, i.e. M(d) ∈ [0,1]^|O| is the concept-vector metadata of a document d ∈ D.
U: The set of all users.
P: The set of all possible user preferences.
C: The set of all possible contexts.
P_O, C_O: An instantiation of P and C for the domain O, where P is represented by the vector space [-1,1]^|O| and C by [0,1]^|O|.
P : U → P: A mapping between users and preferences, i.e. P(u) ∈ P is the preference of user u ∈ U.
C : U × N → C: A mapping between users and contexts over time, i.e. C(u,t) ∈ C is the context of a user u ∈ U at an instant t ∈ N.
EP : U → P: Extended user preferences.
EC : U × N → C: Extended context.
CP : U × N → P: Contextualized user preferences, also denoted as Φ(P(u),C(u,t)).
v_x, where v ∈ [-1,1]^|O|: We shall use this vector notation for concept-vector spaces, where the concepts of an ontology O are the axes of the vector space. For a vector v ∈ [-1,1]^|O|, v_x ∈ [-1,1] is the coordinate of v corresponding to the concept x ∈ O. This notation will be used for all the elements ranging in the [-1,1]^|O| space, such as document metadata M_x(d), user preferences P_x(u), runtime context C_x(u,t), and others.
Q: The set of all possible user requests, such as queries, viewing documents, or browsing actions.
prm : D × U × N → [-1,1]: prm(d,u,t) is the estimated contextual interest of user u for the document d at instant t.
sim : D × Q → [0,1]: sim(d,q) is the relevance score computed for the document d for a request q by a retrieval system external to the personalization system.
score : D × Q × U × N → [-1,1]: score(d,q,u,t) is the final personalized relevance score computed by a combination of sim and prm.
4.2 Preliminaries

Our strategies for the dynamic contextualization of user preferences are based on three basic principles: a) the representation of context as a set of domain ontology concepts that the user has “touched” or followed in some manner during a session, b) the extension of this representation of context by using explicit semantic relations among concepts represented in the ontology, and c) the extension of user preferences by a similar principle. Roughly speaking, the “intersection” of these two sets of concepts, with combined weights, will be taken as the user preferences of interest under the current focus of user action. The ontology-based extension mechanisms will be formalized on the basis of an approximation to conditional probabilities, derived from the existence of relations between concepts. Before the models and mechanisms are explained in detail, some preliminary ground for the calculation of combined probabilities will be provided, to be used in the sequel for our computations.

Given a finite set Ω, and a ∈ Ω, let P(a) be the probability that a holds some condition. It can be shown that the probability that a holds the condition, and is not the only element in Ω that holds it, can be written as:

P( a ∩ ⋃_{x ∈ Ω−{a}} x ) = Σ_{S ⊆ Ω−{a}} [ (−1)^{|S|+1} · ∏_{x ∈ S} P(x) · P(a|x) ]

Equation 4. Probability of holding condition a, inside a finite set Ω.

provided that the events a ∩ x are mutually independent for all x ∈ Ω (the right-hand side of the above formula is based on the inclusion-exclusion principle applied to probability (Whitworth 1965)). Furthermore, if we can assume that the probability that a is the only element in Ω that holds the condition is negligible, then the previous expression is equal to P(a).
We shall use this form of estimating “the probability that a holds some condition” for two purposes: a) to extend user preferences over ontology concepts, and b) to determine which parts of the user preferences are relevant for a given runtime context, and should therefore be activated to personalize the results (the ranking) of semantic retrieval, as part of the process described in detail by Crestani (1997). In the former case, the condition will be “the user is interested in concept a”, that is, P(a) will be interpreted as the probability that the user is interested in concept a of the ontology. In the other case, the condition will be “concept a is relevant in the current context”. In both cases, the universe Ω will correspond to a domain ontology O (the universe of all concepts).

In both cases, Equation 4 provides a basis for estimating P(a) for all a ∈ O from an initial set of concepts x for which we know (or have an estimation of) P(x). In the case of preferences, this set is the initial set of weighted user preferences for ontology concepts, where concept weights are interpreted as the probability that the user is interested in a concept. In the other case, the initial set is a weighted set of concepts found in elements (links, documents, queries) involved in user actions in the span of a session with the system. Here this set is taken as a representation of the semantic runtime context, where the weights represent the estimated probability that such concepts are important in the user goals. In both cases, Equation 4 will be used to implement an expansion algorithm that computes probabilities (weights) for all concepts, starting from the initially known (or assumed) probabilities for the initial set. In the second case, the algorithm will compute a context relevance probability for preferred concepts that will be used as the degree of activation that each preference shall have. Put in rough terms, the (fuzzy) intersection of context and preferences will be found with this approach.

Equation 4 has some interesting properties with regard to the design of algorithms based on it.
In general, for X = {x_i}_{i=0}^n, where x_i ∈ [0,1], let us define R(X):

R(X) = Σ_{S ⊆ X, S ≠ ∅} [ (−1)^{|S|+1} × ∏_{x_i ∈ S} x_i ]

Equation 5. Probability of holding a condition over a set of independent variables.

It is easy to see that this function has the following properties:

•R(X) ∈ [0,1].
•R(∅) = 0.
•R(X) ≥ x_i for all i (in particular, this means that R(X) = 1 if x_i = 1 for some i).
•R(X) increases monotonically with respect to the value of x_i, for all i.
•R(X) can be defined recursively as R(X) = x_0 + (1 – x_0) · R({x_i}_{i=1}^n). R(X) can thus be computed efficiently. Note also that R(X) does not vary if we reorder X.

These properties will be useful for the definition of algorithms with computational purposes.

Note that the properties of R(X) can in general only be satisfied if x_i ∈ [0,1]. Let us suppose now that we are using R(X) for the estimation of the set of preferences P(a), given an initial set P(x). We have defined P(x) ∈ [-1,1] and P(a) ∈ [-1,1]. While the subset of positive preferences P⁺(x) ∈ [0,1] does satisfy the restriction of R(X), the subset of negative preferences, i.e. the subset P⁻(x) ∈ [-1,0), does not. Furthermore, negative preference values pervert Equation 4, as it is based on pure probability computation. To solve this we can redefine P⁻(x) as P⁻(x) = 1 + (P⁻(x)) ∈ (0,1], the probability of disliking the concept x. Thus, whenever we want to estimate the final value of preferences P(a), we will calculate it as:

P(a) = R(P⁺(x)) − R(P⁻(x))

Equation 6. Independent calculation of negative and positive preferences.

That is, we calculate separately the estimation of the probability of liking a concept and the probability of disliking it. The final P(a) ∈ [-1,1] is the result of subtracting the probability of disliking a concept x from the probability of liking it. We therefore see the properties of liking and disliking as antagonistic.
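The equivalence between the inclusion-exclusion sum of Equation 5 and its recursive form can be checked with a short Python sketch. The helper names are illustrative, and the input values are arbitrary examples:

```python
from itertools import combinations

def r_inclusion_exclusion(xs):
    """R(X) via the explicit inclusion-exclusion sum of Equation 5."""
    total = 0.0
    for k in range(1, len(xs) + 1):
        for subset in combinations(xs, k):
            prod = 1.0
            for x in subset:
                prod *= x
            total += (-1) ** (k + 1) * prod
    return total

def r_recursive(xs):
    """Equivalent recursive form: R(X) = x0 + (1 - x0) * R(X minus x0)."""
    if not xs:
        return 0.0
    return xs[0] + (1.0 - xs[0]) * r_recursive(xs[1:])

xs = [0.4, 0.54]
print(round(r_recursive(xs), 3))                                  # 0.724
print(abs(r_recursive(xs) - r_inclusion_exclusion(xs)) < 1e-12)   # True
```

The recursion also makes the listed properties immediate: any x_i = 1 forces R(X) = 1, and increasing any x_i can only increase the result.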
4.3 Semantic Context for Personalized Content Retrieval

Our model for context-based personalization can be formalized in an abstract way as follows, without any assumption on how preferences and context are represented. Let U be the set of all users, let C be the set of all contexts, and P the set of all possible user preferences. Since each user will have different preferences, let P : U → P map each user to her/his preference. Similarly, each user is related to a different context at each step in a session with the system, which we shall represent by a mapping C : U × N → C, since we assume that the context evolves over time. Thus we shall often refer to the elements from P and C in the form P(u) and C(u,t) respectively, where u ∈ U and t ∈ N.

Definition 1. Let C be the set of all contexts, and let P be the set of all possible user preferences. We define the contextualization of preferences as a mapping Φ : P × C → P so that for all p ∈ P and c ∈ C, p |= Φ(p,c).

Here the entailment p |= q means that any consequence that could be inferred from q could also be inferred from p. For instance, given a user u ∈ U, if P(u) = q implies that u “likes/dislikes x” (whatever this means), then u would also “like x” if her preference was p.

Now we can particularize the above definition for a specific representation of preference and context. In our model, we consider user preferences as the weighted set of domain ontology concepts for which the user has an interest, where the intensity of interest can range from -1 to 1.

Definition 2. Given a domain ontology O, we define the set of all preferences over O as P_O = [-1,1]^|O|, where given p ∈ P_O, the value p_x represents the preference intensity for a concept x ∈ O in the ontology.

Definition 3. Under the above definitions, we particularize |=_O as follows: given p, q ∈ P_O, p |=_O q ⇔ ∀x ∈ O, either q_x ≤ p_x, or q_x can be deduced from p using consistent preference extension rules over O.
Now, our particular notion of context is that of the semantic runtime context, which we define as the background themes under which user activities occur within a given unit of time.

Definition 4. Given a domain ontology O, we define the set of all semantic runtime contexts as C_O = [0,1]^|O|.

With this definition, a context is represented as a vector of weights denoting the degree to which a concept is related to the current activities (tasks, goals, short-term needs) of the user.

In the next sections, we define a method to build the values of C(u,t) during a user session, a model to define Φ, and the techniques to compute it. Once we define this, the activated user preferences in a given context will be given by Φ(P(u),C(u,t)).

4.4 Capturing the Context

At the implementation level, our representation of semantic runtime context is defined as a set of concepts that have been involved, directly or indirectly, in the interaction of a user u with the system during a retrieval session. Therefore, at each point t in time, the context can be represented as a vector C(u,t) ∈ [0,1]^|O| of concept weights, where each x ∈ O is assigned a weight C_x(u,t) ∈ [0,1]. This context value may be interpreted as the probability that x is relevant for the current semantic context. Additionally, time is measured by the number of user requests within a session. Since it is clear that the context is relative to a user, in the following we shall often omit this variable and use C(t), or even C for short, as long as the meaning is clear.

In our approach, C(t) is built as a cumulative combination of the concepts involved in successive user requests, in such a way that the importance of concepts fades away with time. This simulates a drift of concepts over time; a general approach towards achieving this follows. This notion of context extraction comes from the implicit feedback area (White et al. 2005b); concretely, our model belongs to the ostensive models, as one that uses a time variable and gives more importance to items occurring close in time (Campbell and van Rijsbergen 1996).

Right after each user request, a request vector Req(t) ∈ C_O is defined. This vector may be:

•The query concept-vector, if the request is a query.
•A concept-vector containing the topmost relevant concepts in a document, if the request is a “view document” request.
•The average concept-vector corresponding to a set of documents marked as relevant by the user, if the request is a relevance feedback step.
•If the request is a “browse the documents under a category c” request, Req(t) is the sum of the vector representation of the topic c (in the [0,1]^|O| concept vector space), plus the normalized sum of the metadata vectors of all the content items belonging to this category.

As the next step, an initial context vector C(t) is defined by combining the newly constructed request vector Req(t) with the context C(t–1), where the context weights computed in the previous step are automatically reduced by a decay factor ξ, a real value in [0,1], where ξ may be the same for all x, or a function of the concept or its state. Consequently, at a given time t, we update C_x(t) as:

C_x(t) = ξ · C_x(t – 1) + (1 – ξ) · Req_x(t)

Equation 7. Runtime semantic context.

The decay factor defines for how many action units a concept will be considered, and how fast a concept will be “forgotten” by the system. This may seem similar to pseudo-relevance feedback, but it is not used to reformulate the query; rather, it is used to focus the preference vector, as shown in the next section.
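The Equation 7 update can be sketched as follows, assuming concept vectors stored as plain dictionaries and a fixed decay factor ξ = 0.5 (the function name and example concepts are illustrative):

```python
def update_context(context, req, xi=0.5):
    """C_x(t) = xi * C_x(t-1) + (1 - xi) * Req_x(t), per Equation 7.
    Concepts absent from a vector implicitly have weight 0."""
    concepts = set(context) | set(req)
    return {x: xi * context.get(x, 0.0) + (1.0 - xi) * req.get(x, 0.0)
            for x in concepts}

# A concept that stops being reinforced fades away geometrically:
C = {}
C = update_context(C, {'Formula1': 1.0})   # Formula1 enters at 0.5
C = update_context(C, {'Monaco': 1.0})     # Formula1 decays to 0.25, Monaco at 0.5
print(C)
```

With ξ = 0.5, a concept loses half its weight at every request in which it does not reappear, which implements the “forgetting” behavior described above.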
4.5 Semantic Extension of Context and Preferences

The selective activation of user preferences is based on an approximation to conditional probabilities: given x ∈ O with P_x(u) ∈ [-1,1] for some u ∈ U, i.e. a concept on which a user u has some interest/dislike, the probability that x is relevant for the context can be expressed in terms of the probability that x and each concept y directly related to x in the ontology belong to the same topic, and the probability that y is relevant for the context. With this formulation, the relevance of x for the context can be computed by a constrained spreading activation algorithm, starting with the initial set of context concepts defined by C.

Our strategy is based on weighting each semantic relation r in the ontology with a measure w(r,x,y) that represents the probability that, given the fact that r(x,y) holds, x and y belong to the same topic. As explained above, we will use this as a criterion for estimating the certainty that y is relevant for the context if x is relevant for the context, i.e. w(r,x,y) will be interpreted as the probability that a concept y is relevant for the current context if we know that a concept x is in the context, and r(x,y) holds. Based on this measure, we define an algorithm to expand the set of context concepts through semantic relations in the ontology, using a constrained spreading activation strategy over the semantic network defined by these relations. As a result of this strategy, the initial context C(t) is expanded to a larger context vector EC(t), where of course EC_x(t) ≥ C_x(t) for all x ∈ O.

Let R be the set of all relations in O, let R´ = R ∪ R⁻¹ = R ∪ {r⁻¹ | r ∈ R}, and w : R´ → [0,1]. The extended context vector EC(t) is computed by:

EC_x(t) = C_x(t), if C_x(t) > 0
EC_x(t) = R({ EC_y(t) · w(r,y,x) · power(y) | y ∈ O, r ∈ R´, r(y,x) }), otherwise

Equation 8. Expanded context vector.

where R is the set of all concept relations in the ontology O, and R⁻¹ is the set of all inverse relations of R, i.e. a concept x has an inverse relation r⁻¹(x,y) iff there exists r(y,x) with r ∈ R.
Finally, power(x) ∈ [0,1] is a propagation power assigned to each concept x (by default, power(x) = 1). Note that we are explicitly excluding the propagation between concepts in the input context (i.e. these remain unchanged after propagation).

4.5.1 Spreading Activation Algorithm

The algorithms for expanding preferences and context are based on the so-called Constrained Spreading Activation (CSA) strategy (see e.g. (Crestani 1997; Crestani and Lee 1999; Crestani and Lee 2000)). The first work on CSA was developed by Salton and Buckley (Salton and Buckley 1988). Another relevant reference is (Rocha et al. 2004), where CSA is used to improve the recall of a retrieval system using domain ontologies.

Based on Definition 2, EC(t) can be computed as follows, where C_0(t) = { x ∈ O | C_x(t) > 0 } is the initial input, updated with the new context values resulting from the current request. Given x ∈ O, we define the semantic neighborhood of x as N[x] = {(r, y) ∈ R´ × O | r(x,y)}. This algorithm can also be used as a standalone method for expanding preferences (i.e. computing the EP vector from the initial P), except that time is not a variable, and a different measure w is used. Figure 4.1 shows a simple pseudocode of the algorithm.

expand (C, EC, w)
  for x ∈ O do
    EC_x ← C_x                  // Initialization of the expanded context
    in_path[x] ← false
  for x ∈ C_0 do expand (x, w, 0)

expand (x, w, prev_cx)
  in_path[x] ← true
  for (r, y) ∈ N[x] do          // Optimization: choose (r,y) in decreasing order of EP_y
    if not in_path[y] and C_y = 0 and EC_y < 1 then    // The latter condition saves some work
      prev_cy ← EC_y
      // Undo the last update from x
      EC_y ← (EC_y – w(r,x,y) · power(x) · prev_cx) / (1 – w(r,x,y) · power(x) · prev_cx)
      EC_y ← EC_y + (1 – EC_y) · w(r,x,y) · power(x) · EC_x
      if EC_y > ε then expand (y, w, prev_cy)
  in_path[x] ← false

Figure 4.1. Simple version of the spreading activation algorithm.

To exemplify the expansion process, Figure 4.2 shows a simple preference expansion in which three concepts are involved. The user has preferences for two of these concepts, which are related to a third through two different ontology relations. The expansion process shows how a third preference can be inferred, “accumulating” the evidence of preference from the original two preferences.

Figure 4.2. Example of preference expansion with the CSA algorithm. The user has preference p_x = 0.8 for 'Beach' (x) and p_z = 0.6 for 'Boat' (z). 'Beach' is related to 'Sea' (y) through r_1 = nextTo with w(r_1) = 0.5, and 'Boat' is related to 'Sea' through r_2 with w(r_2) = 0.9. The inferred preference for 'Sea' is first p_y¹ = p_x · w(r_1,x,y) = 0.8 × 0.5 = 0.4, and then p_y² = p_y¹ + (1 – p_y¹) · p_z · w(r_2,z,y) = 0.4 + (1 – 0.4) × 0.9 × 0.6 = 0.724.

The simple expansion algorithm can be optimized by using a priority queue (a heap H) w.r.t. EC_x, popping and propagating concepts to their immediate neighborhood (i.e. without recursion). This way the expansion may get close to O(M log N) time (provided that elements are not often pushed back into H once they are popped out), where N = |O| and M = |R´|. With the suggested optimizations, M log N should be closer to M log |C_0|. The algorithm would thus be:
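The recursive algorithm of Figure 4.1 can be sketched in Python, reproducing the Figure 4.2 example. The graph encoding, the concept names, and power(x) = 1 are simplifying assumptions for illustration; this is not the thesis implementation.

```python
# Minimal sketch of the CSA expansion (Figure 4.1) on the Figure 4.2 example.
EPSILON = 1e-4

def expand_all(C, neighbors, w, power=lambda x: 1.0):
    """Spread the initial weights C over the relation graph, returning EC."""
    EC = dict(C)                       # expanded vector, initialized with C
    in_path = {x: False for x in EC}   # cycle guard along the current path
    def expand(x, prev_cx):
        in_path[x] = True
        for (r, y) in neighbors.get(x, ()):
            if not in_path.get(y) and C.get(y, 0.0) == 0.0 and EC.get(y, 0.0) < 1.0:
                prev_cy = EC.get(y, 0.0)
                # Undo the previous update contributed by x, then redo it with EC_x
                undo = w[(r, x, y)] * power(x) * prev_cx
                ec_y = (prev_cy - undo) / (1.0 - undo) if undo < 1.0 else 0.0
                ec_y = ec_y + (1.0 - ec_y) * w[(r, x, y)] * power(x) * EC[x]
                EC[y] = ec_y
                if ec_y > EPSILON:
                    expand(y, prev_cy)
        in_path[x] = False
    for x in [c for c, v in C.items() if v > 0.0]:
        expand(x, 0.0)
    return EC

# Beach --nextTo--> Sea (w=0.5), Boat --r2--> Sea (w=0.9), plus inverses
C = {'Beach': 0.8, 'Boat': 0.6, 'Sea': 0.0}
neighbors = {'Beach': [('nextTo', 'Sea')], 'Boat': [('r2', 'Sea')],
             'Sea': [('nextTo-1', 'Beach'), ('r2-1', 'Boat')]}
w = {('nextTo', 'Beach', 'Sea'): 0.5, ('r2', 'Boat', 'Sea'): 0.9,
     ('nextTo-1', 'Sea', 'Beach'): 0.5, ('r2-1', 'Sea', 'Boat'): 0.9}
EC = expand_all(C, neighbors, w)
print(round(EC['Sea'], 3))  # 0.724
```

Note that the two contributions to 'Sea' combine through the same x + (1 − x)·y scheme as the recursive form of R(X) in Equation 5, which is what makes the accumulated weight independent of the expansion order.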
Priority queue variation of the spreading activation algorithm.

    expand (C, EC, w)
        for x ∈ O do EC_x = C_x
        // Optimize: insert elements x with C_x > 0, and copy the rest at the end
        H ← build_heap (O × {0})
        while H ≠ ∅ do
            (x, prev_cx) ← pop(H)       // Optimization: make heapify stop at the first x with EC_x = 0
            if EC_x < ε then stop       // Because remaining nodes are also below ε (a fair saving)
            for (r, y) ∈ N[x] do
                if C_y = 0 and EC_y < 1 then    // Note that it is possible that y ∉ H and yet EC_y be updated
                    prev_cy ← EC_y
                    // Undo last update from x
                    EC_y ← (EC_y − w(r,x,y) · power(x) · prev_cx) / (1 − w(r,x,y) · power(x) · prev_cx)
                    // Optimize: heapify stops as soon as EC_z = 0
                    EC_y ← EC_y + (1 − EC_y) · w(r,x,y) · power(x) · EC_x
                    push (H, y, prev_cy)        // Optimize again: push starts from the first z with EC_z > 0

There are many optimizations in the CSA state of the art that try to prune the whole possible expansion tree; the most common were also adapted into our algorithm:

• Do not expand a node more than n_j jumps. This is the basic "stop condition" in CSA algorithms. The motivation is not to expand to concepts that are "meaningfully" far away from the original concept. For instance, expanding the interest for cats to 'LiveEntity' does not add any useful semantics.

• Do not expand a node (or expand it with a reduction degree of w_c = 1/n_c) that has a fan-out greater than n_c. The goal is to reduce the effect of "hub" nodes that have many relations with other concepts. For instance, if a user is interested in a group of companies that trade on the Nasdaq stock exchange and belong to the Computer and Hardware sector, a correct inference is that the user could like other companies with these two features, but an inference could be considered incorrect if it propagates the preference to the class 'Company' and expands to a thousand other companies that don't have anything to do with the originals.

• Once a node has been expanded up through n_h hierarchical properties, do not expand the node any more through hierarchical properties.
The intention is not to generalize a preference (semantically) more than n_h times, as this is a risky assumption to make with the original user's preferences. For instance, in the example of section 3.1, where the user likes snakes, lizards, and chameleons, the system can infer quite safely that the user has a probability of liking reptiles in general, but it would not be so straightforward to infer a preference for any kind of animal in general.

Figure 4.4 shows a final version of the algorithm with priority queue and optimization parameters:

    expand (C, EC, w)
        for x ∈ O do EC_x = C_x
        // Optimize: insert elements x with C_x > 0, and copy the rest at the end
        H ← build_heap (O × {0}, hierarchy_level = 0, expansion_level = 0)
        while H ≠ ∅ do
            // Optimize here: make heapify stop at the first y with EC_y = 0
            (x, prev_cx, hierarchy_level, expansion_level) ← pop(H)
            if EC_x < ε then stop       // Because remaining nodes are also below ε (a fair saving)
            // Jump limit condition
            stop if expansion_level […]
            […]
            push (H, y, prev_cy, hierarchy_level, expansion_level)

Figure 4.4. Parameter optimized variation with priority queue of the spreading activation algorithm.

The spreading activation algorithm is rich in parameters, and normally they have to be set according to the ontology or ontologies used for the preference expansion. Ontologies vary in structure and definition: specialized ontologies usually have a high level of depth, whereas general ontologies usually have a high number of topic concepts, with a high level of fan-out for every topic. A summary of these parameters can be found in Table 4.1.

    w(r,x,y) / w(r)   Probability that a concept y is relevant for the current context if we
                      know that a concept x is in the context or in the user profile, and
                      r(x,y) holds. Also seen as the power of preference/context propagation
                      that the relation r ∈ R has for concepts x and y. Perhaps the most
                      important parameter of the CSA algorithm, and also the most difficult
                      to decide. In our experiments (see Chapter 5) these values were
                      empirically fixed for every property in the ontology, without taking
                      into account the concepts involved in the relation; this can be
                      expressed as w(r). Future work will study the power of propagation as a
                      function of the involved concepts, investigating techniques of semantic
                      relatedness between two concepts of the same ontology.

    power(x)          The power of preference/context propagation that a single concept x
                      has. By default equal to 1.

    ε                 The minimum threshold weight that a concept has to have in order to
                      propagate its weight to related concepts. A high threshold value
                      improves the performance of the propagation algorithm, as fewer
                      expansion actions have to be made. However, higher values of this
                      threshold exploit less of the underlying semantics of the KB, thus
                      resulting in poorer propagation inferences.

    n_j               Maximum number of expansions that the algorithm performs from a single
                      concept. As with the threshold value ε, this parameter has to be set as
                      a tradeoff between performance and quality of inference.

    w_e(x, n_e)       Reduction factor w_e of the extended context/preference x applied to a
                      node with a fan-out of n_e. In our implementation w_e is defined as
                      w_e(x, n_e) = 1/n_e.

    n_h               Maximum number of times that a concept can be generalized.

Table 4.1. Spreading activation algorithm optimization parameters

4.5.2 Comparison to Classic CSA

The proposed algorithm is a variation of the Constrained Spreading Activation (CSA) technique originally proposed by Salton (Salton and McGill 1986). Here EC corresponds to I, and C corresponds to the initial values of I when the propagation starts. The output function f is here a threshold function, i.e. O_j = I_j if I_j > ε, and 0 otherwise. The activation threshold k_j, here ε, is the same for all nodes. In practice, we use a single property, EC, for both O and I.
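The priority-queue variation described earlier (Figure 4.3) can be sketched with Python's `heapq` (a min-heap, so priorities are negated). This is a simplified, hypothetical sketch: the n_j, n_c and n_h constraints of Figure 4.4 are omitted, the inputs follow the same assumed dict-based representation as before, and a small re-push guard is added so the loop terminates on cyclic graphs.

```python
import heapq

def expand_context_pq(C, neighbors, weight, power, epsilon=0.05):
    """Non-recursive expansion: pop the concept with the highest expanded value
    and propagate it to its immediate neighborhood (Figure 4.3 sketch)."""
    EC = dict(C)
    # heap entries: (-EC value at push time, concept, concept's value before that update)
    H = [(-v, x, 0.0) for x, v in C.items() if v > 0]
    heapq.heapify(H)
    while H:
        _, x, prev_cx = heapq.heappop(H)
        if EC.get(x, 0.0) < epsilon:
            break                        # remaining entries are below the threshold too
        for r, y in neighbors.get(x, []):
            if C.get(y, 0) > 0 or EC.get(y, 0.0) >= 1:
                continue                 # input-context concepts stay unchanged
            w = weight(r, x, y) * power.get(x, 1.0)
            prev_cy = EC.get(y, 0.0)
            undone = (prev_cy - w * prev_cx) / (1 - w * prev_cx)  # undo last update from x
            EC[y] = undone + (1 - undone) * w * EC[x]
            if EC[y] > prev_cy + 1e-12:  # re-push only on an effective update
                heapq.heappush(H, (-EC[y], y, prev_cy))
    return EC
```

Stale heap entries (a concept pushed again with a higher value) are tolerated rather than removed, which is the usual lazy-deletion idiom with `heapq`; on the Figure 4.2 example this sketch reaches the same EC[Sea] ≈ 0.724 as the recursive version.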
Instead of computing I_j = Σ_i O_i · w_{i,j}, we compute I_j = ⊕_i O_i · w_{i,j}, whereby I_j is normalized to [0,1], and corresponds to the probability that node j is activated, in terms of the probability of activation of each node i connected to j. Here w_{i,j} is w(r) · power(i), where r(i,j) for some r ∈ R, and the power property of i is a node-dependent propagation factor. Since the graph is a multigraph, because there may be more than one relation between any two nodes, w_{i,j} has to be counted as many times as there are such relations between i and j.

The termination condition, excluding the optimization parameters, is that no node with I_j > ε remains that has never been expanded. Finally, the propagation to (and therefore through) the set of initial nodes (the ones that have a non-zero initial input value) is inhibited.

Regarding the spreading constraints, in the current version of our algorithm we use:

• Distance constraint.
• Fan-out constraint.
• Path constraint: links with high weights are favored, i.e. traversed first.
• Activation constraint.

4.6 Semantic Preference Expansion

In real scenarios, user profiles tend to be very scattered, especially in those applications where user profiles have to be manually defined, since users are usually not willing to spend time describing their detailed preferences to the system, even less to assign weights to them, especially if they do not have a clear understanding of the effects and results of this input. Even when an automatic preference learning algorithm is applied, usually only the main characteristics of user preferences are recognized, thus yielding profiles that may entail a lack of expressivity.

In order to meet this limitation, two novel approaches may be followed: a) the extension of preferences through ontology relations, following the same approach, and even the same algorithm, that is used to expand the runtime context; and b) a context-sensitive expansion, based on the representation and exploitation of the initial set of user preferences.
Both approaches are described below and presented as alternatives towards a common goal.

4.6.1 Stand-alone Preference Expansion

The first proposed approach for preference extension is to follow the same mechanism that was used for the extension of the semantic context, as described in section 4.5. The main difference is that here relations are assigned different weights w'(r,x,y) for propagation, since the inferences one can make on user preferences, based on the semantic relations between concepts, are not necessarily the same as one would make for contextual relevance.

For instance, if a user is interested in the Sumo sport, and the concept Japan is in the context, we assume it is appropriate to activate the specific user interest for Sumo, to which the country has a strong link: we have nationalSport(Japan, Sumo), and w(nationalSport, Japan, Sumo) is high. However, if a user is interested in Japan, we believe it is not necessarily correct to assume that her interest for Sumo is high, and therefore w'(nationalSport, Japan, Sumo) should be low.

In general, it is expected that w'(r,x,y) ≤ w(r,x,y), i.e. user preferences are expected to have a shorter expansion than the context has. Given an initial user preference P, the extended preference vector EP is defined by:

    EP_y = P_y                                                          if P_y > 0
    EP_y = ⊕_{x ∈ O, r ∈ R : r(x,y)}  EP_x · w'(r,x,y) · power(x)       otherwise

Equation 9.
Expanded preference vector

This is equivalent to Equation 6, where EC, C and w have been replaced by EP, P and w', and the variable t has been removed, since long-term preferences are taken to be stable along a single session. Also, following the insights introduced in section 4.2, the final expanded preferences are computed in two expansion phases: first an expansion of the positive preferences EP_y⁺ and, second, an expansion of the negative preferences EP_y⁻, calculating the final value by subtracting the expanded negative preferences (dislikes) from the expanded positive preferences (interests):

    EP_y = EP_y⁺ − EP_y⁻

Equation 10. Final expanded preference vector estimation.

4.7 Contextual Activation of Preferences

After the context is expanded, only the preferred concepts with a context value different from zero will count for personalization. This is done by computing a contextual preference vector CP, defined by CP_x = EP_x · C_x for each x ∈ O, where EP is the vector of extended user preferences. Now CP_x can be interpreted as a combined measure of the likelihood that concept x is preferred and how relevant the concept is to the current context. Note that this vector is in fact dependent on user and time, i.e. CP(u, t). This can be seen as the intersection value of expanded preferences and context (see Figure 4.5).
referenc rsonalreleva dasprm(d, oncept-vecto -neutral rank ferences in the t(theonebei equestwould Personalizat ween prefere extual prefere eP(u)|=Φ( P(u)throug es ancemeasure u,t)=cos orofthedoc k value, to pr e instant t, the ingcurrently contributetw tion in Cont nces and con ence mappin (P(u),C(u,t)) ghthecons ePRMofd (CP(u,t– cument 14 .Th roduce the fi e context is tak respondedby wicetotheran text — Chap ntext ng Φ as defin ,sinceCP x ( strainedspre d forauseru 1),M(d)),w hisprmfunct inal rank val ken one step e ythesystem) nking:onceth pter 4 ned in u,t)> eading uina where tionis ue for earlier, isnot hrough Chap The respe perso query 4.9 The a andt techn acqui user perm betwe appli purpo conte Conte conte consi drift. Inge maint pter 4 — Pe similaritym ecttoaquer onalizedbias y and content Long an approach intr thelong-term nique,weac isition appro profileand manent,orha eenandwit cation.Whi oseofcontex extaretake extualized Pr extualization ideredas“sl Figure eneral,propo taintwo(or ersonalizatio scor measuresim( ryorreques sintoanyra t based algor nd Short roduced in th mprofilesho cknowledge oaches. Our p theuserc avelong-term thinsessions lepreferenc xtistoalter enintocon reference va ofpreferen lotsofconte 4.6. Charact osalsthatta rmore)diffe on in Conte re (d, q, u, t) d,q)stands t.Ingeneral ankingtechn rithms: keyw t Term I his work trie ouldbeacqu thatprofile personalizati ontextand mupdatemo s.Contexth cesaremean rtheseprefe nsiderationin alue CP(u, t) nces,ascon exts”.Figure terization of acklethedi erentinterest ext ) = f (prm (d, sforanyran l,thecombi niquethatc word search, q Interests es to make a uired.Althou learningtec ion approach treatsthem odules,user hereishasa ntforexplo erencesinsu nthisexpl with the tim ntextualized 4.6depicts concept drift ifferencesbe trepresentat , u, t), sim (d nkingtechni inationabov computessim query by exa s in the S clear distinc ughweonly chniquesdo h uses two di differently. 
contextisfa aclearlydis oitationinth uchawayth oitationpha me variable in conceptske thisconcept ft in the conte etweenshort tions.Howev d, q)) iquetorank vecanbeus m(d,q),such ample, metad State of t ction betwee tacklethec differinna ifferent repre .Whileuse armoredyn stinctionto hepersonali hatonlypref ase.Thede ndicates the eepchanging tofcontextu extualization tandlong-t ver,theyusu kdocuments sedtointrod hasthecom data search e the Art n how the co contextacqui aturewithco esentations f erpreference amicandch preferences izationphas ferencesrela ependenceo drift nature ginwhatc ualizedprefe n algorithm terminteres uallydon'th 79 swith ducea mmon etc… ontext isition ontext for the esare hanges inits e,the atedto ofthe of the anbe erence talso havea 80Personalization in Context — Chapter 4 cleardistinctionbetweenthistworepresentations,eitherbynotdifferentiatingtheacquisition techniques(Ahn et al. 2007; Billsus and Pazzani 2000), or by applying them indistinctly during the exploitation phase (Ahn et al. 2007; Sugiyama et al. 2004). Earlywork,suchastheAlipespersonalizednewsagent(Widyantoroetal.1997),already tackledthedifferencesonexploitingandacquiringuserlongandshort-terminterests.The context was obtained only from the actual session’s feedback, while the user profile was learnt from all the past historical feedback. The system takes into consideration negative feedback, but only for the context representation. A final profile is created by basic combination of the short- termandlong-termprofile,althoughoneorotherprofilecanbegivenmoreorlessweight dependingonhowmuchhasthesystemhasalreadylearntabouttheuser.Forinstance,the system can give more weight to the short-term profile when there are few learned documents in the long-term profile. Sugiyama et al. (2004) also applied a combination of the short and long- termprofiles.Theyacquiredthreedifferentprofiles:along-termprofile,ashort-termprofile and a session-based profile. 
The three were obtained with the same term extraction technique, but were maintained differently: the long-term profile had a forgetting function, where preferences were discarded as the days passed; the short-term profile was obtained from the current day's sessions, and was combined into the session-based profile, obtained from the current session, giving a higher weight to the latter.

Other systems do not combine the profiles in any form, but apply the exact same acquisition technique. Billsus and Pazzani (2000) and Ahn et al. (2007) apply the exact same extraction technique to create both their long-term profile and short-term profile. They only limit the number of documents (from the browsing history) to be used as input in order to create the short-term profile: 100 and 20 documents respectively. Regarding the exploitation phase, Billsus and Pazzani apply either the short or the long-term profile depending on which is more similar to the currently opened document. Ahn et al. let the user select which profile to apply.

Shen and Tan apply the same language model approach to two different systems, a personalization system (Tan et al. 2006) and a context-aware system (Shen et al. 2005a). Both systems create a language model based on the past clickthrough history, limited to the current session for the context-aware system. Once the language models are created, both can be incorporated into the query, in order to take into consideration the interests of the user, her/his current context, or both.

4.10 An Example Use Case

As an illustration of the application of the contextual personalization techniques, consider the following scenario: Sue is subscribed to an economic news content provider. She works for a major food company, so she has preferences for news related to companies of this sector, but she also tries to be up-to-date with the technological companies, as her company is trying to apply the latest technologies in order to optimize the food production chain. Sue is planning a trip to Tokyo and Kyoto, in Japan.
Her goal is to take ideas from the different production chains of several Japanese partner companies. She has to gather documentation about different companies in Japan, so she accesses the content provider and begins a search session.

Let us assume that the proposed framework has learned some of Sue's preferences over time, or that Sue has manually added some preferences to the system, i.e. Sue's profile includes weighted semantic interests for domain concepts of the ontology. These include several companies from the food, beverage and tobacco sector and also several technological companies. Only the relevant concepts have been included and all the weights have been set to 1.0 to simplify the example. This defines the P(u_Sue) vector, shown in Table 4.2.

    P(u_Sue)
    Yamazaki Baking Co.     1.0
    Japan Tobacco Inc.      1.0
    McDonald's Corp.        1.0
    Apple Computers Inc.    1.0
    Microsoft Corp.         1.0

Table 4.2. Example of user profile: Sue's preferences

In our approach, these concepts are defined in a domain ontology containing other concepts and the relations between them, a subset of which is exemplified in Figure 4.7.

Figure 4.7. A subset of domain ontology concepts involved in the use case. [The figure relates Tokyo and Kyoto to Japan through subRegionOf; Yamazaki Baking and Japan Tobacco to Japan through locatedIn; the companies to the Food, Beverage & Tobacco and Technology sectors through activeInSector; the McDonald's, Apple and Microsoft brands through ownsBrand; and the person Makoto Tajima through withinOrganization.]

The propagation weight manually assigned to each semantic relation is shown in Table 4.3. Weights were initially set by manually analyzing and checking the effect of propagation on a list of use cases for each relation, and were tuned empirically afterwards. Investigating methods for automatically learning the weights is an open research direction for our future work.

    Relation r            w(r)    w(r⁻¹)
    locatedIn             0.0     0.6
    withinOrganization    0.5     0.5
    subRegionOf           0.6     0.9
    activeInSector        0.3     0.8
    ownsBrand             0.5     0.5

Table 4.3.
Example of propagation weights through semantic relations

When Sue enters a query q1 (the query-based search engine can be seen essentially as a black box for our technique), the personalization system adapts the result ranking to Sue's preferences by combining the query-based sim(d, q) and the preference-based prm(d, u_Sue) scores for each document d that matches the query, as described in Section 3.2. At this point, the adaptation is not contextualized, since Sue has just started the search session, and the runtime context is still empty (i.e. at t = 0, C(u, 0) = ∅).

But now suppose that the information need expressed in q1 is somehow related to the concepts Tokyo and Kyoto, as Sue wants to find information about the cities she is visiting. She opens and saves some general information documents about the living and economic style of these two cities.

As a result, the system builds a runtime context out of the metadata of the selected documents and the executed query, including the elements shown in Table 4.4. This corresponds to the C vector (which for t = 1 is equal to Req(t)), as defined in section 4.4.

    C(u_Sue, t1)
    Tokyo    1.0
    Kyoto    1.0

Table 4.4. Example of context vector

Now, Sue wants to see some general information about Japanese companies. The contextualization mechanism comes into place, as follows.

1. First, the context set is expanded through semantic relations from the initial context, adding more weighted concepts, shown in bold in Table 4.5. This makes up the EC vector, following the notation used in section 4.5.

    EC(u_Sue, t1)
    Tokyo                                       1.0
    Kyoto                                       1.0
    Japan                                       0.88
    Japan Tobacco Inc.                          0.55
    Yamazaki Baking Co.                         0.55
    Food, Beverage & Tobacco (F,B&T) Sector     0.68
    Person: 'Makoto Tajima'                     0.23

Table 4.5. Example of expanded context vector

2. Similarly, Sue's preferences are extended through semantic relations from her initial ones, as shown in section 4.6. The expanded preferences stored in the EP vector are shown in Table 4.6, where the new concepts are in bold.
    EP(u_Sue)
    Yamazaki Baking Co.        1.0
    Japan Tobacco Inc.         1.0
    McDonald's Corp.           1.0
    Apple Computers Inc.       1.0
    Microsoft Corp.            1.0
    Brand: 'Microsoft'         0.5
    Brand: 'McDonald's'        0.5
    Person: 'Makoto Tajima'    0.5
    Brand: 'Apple'             0.5
    F,B&T Sector               0.70

Table 4.6. Example of extended user preferences

3. The contextualized preferences are computed as described in section 4.7, by multiplying the coordinates of the EC and the EP vectors, yielding the CP vector shown in Table 4.7 (concepts with weight 0 are omitted).

    CP(u_Sue)
    Japan Tobacco Inc.     0.55
    Yamazaki Baking Co.    0.55
    F,B&T Sector           0.48
    Makoto Tajima          0.12

Table 4.7. Example of contextualized user preferences

Comparing this to the initial preferences in Sue's profile, we can see that Microsoft, Apple and McDonald's are disregarded as out-of-context preferences, whereas Japan Tobacco Inc. and Yamazaki Baking Co. have been added because they are strongly semantically related both to Sue's initial preferences (food sector) and to the current context (Japan). Figure 4.8 depicts the whole expansion and preference contextualization process for the presented use case.

Figure 4.8. Visual representation of the preference contextualization. [The figure overlays, on the domain ontology subset of Figure 4.7, the initial user preferences, the extended user preferences, the contextualized user preferences, the initial runtime context, and the extended context.]

4. Using the contextualized preferences shown in Table 4.7, a different personalized ranking is computed in response to the current user query q2, based on the EC(u_Sue, t1) vector instead of the basic P(u_Sue) preference vector, as defined in section 4.7.
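The contextual activation step of the use case can be checked numerically. The sketch below simply multiplies the expanded vectors coordinate-wise, CP_x = EP_x · C_x (Section 4.7), using the values of Tables 4.5 and 4.6 as dicts (an assumed representation; concept labels abbreviated); the products reproduce Table 4.7 up to rounding.

```python
EP = {  # expanded preferences (Table 4.6)
    "Yamazaki Baking Co.": 1.0, "Japan Tobacco Inc.": 1.0,
    "McDonald's Corp.": 1.0, "Apple Computers Inc.": 1.0,
    "Microsoft Corp.": 1.0, "F,B&T Sector": 0.70,
    "Brand:'Microsoft'": 0.5, "Brand:'McDonald's'": 0.5,
    "Brand:'Apple'": 0.5, "Makoto Tajima": 0.5,
}
EC = {  # expanded context (Table 4.5)
    "Tokyo": 1.0, "Kyoto": 1.0, "Japan": 0.88,
    "Japan Tobacco Inc.": 0.55, "Yamazaki Baking Co.": 0.55,
    "F,B&T Sector": 0.68, "Makoto Tajima": 0.23,
}
# Contextualized preferences: concepts with weight 0 are omitted, so Microsoft,
# Apple and McDonald's drop out as out-of-context preferences.
CP = {x: p * EC.get(x, 0.0) for x, p in EP.items() if EC.get(x, 0.0) > 0}
# Japan Tobacco Inc. and Yamazaki Baking Co. -> 0.55; F,B&T Sector -> 0.476
# (0.48 in Table 4.7); Makoto Tajima -> 0.115 (0.12 in Table 4.7)
```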
This example illustrates how our method can be used to contextualize the personalization in a query-based content search system, where the queries could be of any kind: visual, keyword-based, or natural language queries. The approach could be similarly applied to other types of content access services, such as personalized browsing capabilities for multimedia repositories, automatic generation of a personalized slideshow, generation of personalized video summaries (where video frames and sequences would be treated as retrieval units), etc.

Chapter 5

Experimental Work

Evaluating interactive and personalized retrieval engines is known to be a difficult and expensive task (Wilkinson and Wu; Yang and Padmanabhan 2005). The classic IR evaluation model, known as the Cranfield-style evaluation framework (Cleverdon et al. 1966), specifies an evaluation framework with 1) a document collection, 2) a set of topic descriptions that express an information need and 3) explicit relevance assessments made by those who generated the topic descriptions. The only source of user information in the classic model is the topic description, thus the evaluation framework needed to be extended. The most common extension is adding the interaction model between the user and the retrieval system (Borlund 2003; Thomas and Hawking 2006; White 2004a; White and Kelly 2006). The evaluation is then made following the interaction model given a topic, which is called the user's task. This interaction model is generally obtained from real interaction with the users (explicit) or from query log analysis (implicit).

5.1 Evaluation of Interactive Information Retrieval Systems: An Overview

We can broadly group user evaluation approaches into those that use real users, which we will call user centered approaches, and those that do not interact directly with users, or data driven approaches. Inside these two approaches there is a broad spectrum of evaluation techniques.
User centered approaches include user questionnaires (Dumais et al. 2003; Martin and Jose 2004; White et al. 2005a), side-by-side comparisons (Thomas and Hawking 2006), or explicit relevance assessments (Finkelstein et al. 2001). Data driven approaches normally exploit query log analysis (Dou et al. 2007; Thomas and Hawking 2006) or test collections (Shen et al. 2005a). The following subsections summarize the main characteristics of interactive retrieval evaluation. Table 5.1 is a brief classification of the studied techniques.

    REFERENCE                          CORPUS     TASK     ASSESSMENTS         METRIC
    (Ahn et al. 2007)                  Web        Fixed    Fixed               P@N
    (Budzik and Hammond 1999, 2000)    Web        Fixed    Explicit            P@N
    (Dou et al. 2007)                  Web        Free     Implicit            Rank scoring
    (Dumais et al. 2003)               Desktop    Free     None                Questionnaire
    (Finkelstein et al. 2001)          Web        Fixed    Explicit            Non-standard
    (Kraft et al. 2006)                Web        Fixed    Explicit            P@N
    (Leroy et al. 2003)                Corpus     Fixed    Fixed               PR
    (Martin and Jose 2004)             Web        Free     None                Questionnaire
    (Rhodes and Maes 2000)             Web        Fixed    Explicit            Questionnaire
    (Shen et al. 2005b)                Web        Fixed    Explicit            P@N
    (Shen et al. 2005a)                Corpus     Fixed    Fixed, corpus       MAP
    (Sugiyama et al. 2004)             Web        Fixed    Explicit            R-Precision
    (Thomas and Hawking 2006)          Web        Free     Explicit/Implicit   Side-by-side
    (Vishnu 2005)                      Web        Fixed    Users               Non-standard

Table 5.1. Summary of complete evaluation systems

5.1.1 User Centered Evaluation

Borlund suggested an evaluation model for IIR (Interactive Information Retrieval) centered on user evaluation (Borlund 2003). The author proposed a set of changes to the classic Cranfield evaluation model:

• The topics or information needs should be more focused on the human evaluators, who should develop individual and subjective information need interpretations. Borlund suggested the simulated situation, which is composed of a simulated work task situation: a short "cover story" that describes the situation (i.e. context) that leads to an individual information need, and an indicative request, which is a short suggestion to the testers on what to search for. A simulated situation tries to trigger a subjective, but simulated, information need and to construct a platform against which relevance in context can be judged. An example of a simulated situation can be found in Figure 5.1.

• The evaluation metrics should meet the subjective relevance of the users. Borlund suggested the Retrieval Relevance (RR) measure, which adds an additional component of subjective relevance. The RR is composed of two relevance values: one regarding the relevance of the document to the topic (which can be calculated over a prefixed corpus) and a subjective relevance, given by the user, taking into consideration the simulated situation. Borlund states that the classic binary relevance punctuation, where documents are evaluated as relevant or not relevant, is not sufficient for IIR evaluation, where the contextual relevance has to be taken into consideration. Thus, the RR has multiple degree relevance assessments. This tendency can also be observed in other publications (Finkelstein et al. 2001; Järvelin and Kekäläinen 2000; Kraft et al. 2006) as well as in the interactive text retrieval evaluation community (Allan 2003).

    Simulated situation
    Simulated work task situation: After your graduation you will be looking for a job in
    industry. You want information to help you focus your future job seeking. You know it
    pays to know the market. You would like to find some information about employment
    patterns in industry and what kind of qualifications employers will be looking for
    from future employees.
    Indicative request: Find, for instance, something about future employment trends in
    industry, i.e., areas of growth and decline.

Figure 5.1. Example of a simulated situation.

Some user centered evaluation approaches (Ahn et al. 2007; Dumais et al. 2003; Martin and Jose 2004; Rhodes and Maes 2000) use test questionnaires and personal interviews before or after the human tester interacts with the retrieval engine. For instance, the evaluation of the Stuff I've Seen application (Dumais et al. 2003) includes previous and posterior search questionnaires, filled in by the users before, and over a month after, they started interacting with the application. The questionnaire had both quantitative (e.g. "How many times did you use the system yesterday?") and qualitative (e.g. "Do you quickly find web pages that you have visited before?") questions. The responses were then compared with the pre-usage ones. Questionnaires give more direct feedback to the evaluators, but require considerable effort in order to retrieve and analyze the test results; interviews are also highly time consuming.

Rhodes and Maes (2000) use post-questionnaires to evaluate their proactive system once the user has finished a more or less fixed task: writing an essay about a topic. Martin and Jose (2004) and Ahn et al. (2007) use post-questionnaires to complement their users' interaction log analysis.

Another approach for user centered evaluation is using the testers to create the interaction model with the system and then evaluating the system with a quantitative metric (see section 5.1.3). The interaction model can be based on presented topics (i.e. simulated situations) or on a free interaction of the users with the system. In the majority of user-centered evaluations (Budzik and Hammond 2000; Kraft et al. 2006; Shen et al. 2005b; Sugiyama et al. 2004), the users have to perform a set of fixed tasks. For instance, Shen et al. (2005b) used the topics defined in the TREC interactive evaluation track¹⁵ (see section 5.1.4). For each topic, the users perform a free number of queries, in what is known as a search session. At the end of the process the users explicitly judge the relevancy of each search result (results can also be mixed with ones from the baseline search system). The result set from the final iteration is then evaluated and compared against a baseline result set. The baseline is normally a Web search engine, since usually the system acts as a proxy of a popular search engine. Leroy et al.
(2003) use the TREC ad-hoc corpus, topics, and assessments for their quantitative evaluation, using Precision and Recall measures. Vishnu (2005) has users interacting with desktop applications, as this is how the system gets session related information from the user. The users perform tasks that stimulate an interaction with the desktop environment, like writing an essay. At the end of the experiment, the users try the contextualization of a Web search engine and rank the relevance of each result. Finally, the mean relevance metric is compared against the unordered results.

Regarding free interaction evaluation, Thomas and Hawking (2006) presented a side-by-side evaluation schema capable of collecting implicit feedback actions and of evaluating interactive search systems. This evaluation schema consisted of a two panel search page with one query input, showing the results for the input query for both the baseline and the evaluation system. Thus, the users interacted with both of the search engines to be evaluated. The system was capable of collecting implicit feedback measures, i.e. of creating an interaction model for interactive and context-aware system evaluations.

In order to perform the quantitative measures, the users have to provide relevance assessments for the results returned by the evaluated system. The relevance assessments can be collected by implicit or explicit feedback techniques. Explicit relevance assessments require a considerable amount of effort from the testers. Trying to palliate this, some evaluations base the assessments on implicit feedback techniques, where the relevance of a result is inferred by automatically monitoring the user interaction (Dou et al. 2007; Thomas and Hawking 2006). The same indicators of implicit relevance used for user profiling or context acquisition (see section 2.1.2) can be used for this type of implicit evaluation.

¹⁵ Evaluation track for interactive IR systems: http://trec.nist.gov/data/interactive.html
The time the user spent viewing the content, the amount of scrolling, or whether the user saved the result are some of the relevance indicators that can be exploited. The use of implicit feedback for the generation of relevance judgments has been proven to have considerable accuracy: Thomas and Hawking (2006) were able to show that implicit relevance judgments were consistent with explicit feedback evaluations 68-87% of the time.

5.1.2 Data Driven Evaluation

Data driven evaluations construct the interaction without the explicit involvement of a human subject. The advantage of a data driven evaluation is that, once the data set collection is prepared, the corpus can be easily reused for other evaluations. One of the main problems evaluators face arises when the Web is used as the document corpus. The Web is highly dynamic, so the evaluation has to take place as close as possible to the time the interaction model was created, as the model has a strong dependency on the search results that were displayed to the user.

With the idea of easing the construction of interactive evaluation corpora, White presented a model for the automatic creation of interaction models (White 2004a). The model can take into consideration parameters like how many relevant documents are visited, or the user's "wandering" behavior. These models are optimized with real user information.

Another solution is to exploit the query logs of the interaction of users with a publicly available search engine, by means of search proxies. The information in the query logs can range from clickthrough data to the documents opened by the user, the time spent viewing a document, or the opened applications. Dou et al.
(2007) analyzed a large query log history in order to evaluate different personalization and contextualization approaches. This log history, obtained from the MSN search engine [16], contains information such as a cross-session user id (through cookie information), the user's queries, and the clickthrough information. The authors are then able to implicitly state which results are relevant for the users, by globally analyzing the query and clickthrough information of a large set of users. In short, the authors compensate for the uncertainty of implicit relevance assessments with a large base of users. Extracting this relevance criterion from the so-called training set, the authors can then evaluate the effectiveness of the different approaches by calculating the precision of the result set as reordered by each personalization algorithm.

[16] http://www.msn.com

5.1.3 Evaluation Metrics

There are numerous evaluation metrics in the literature for classic IR evaluation, which are largely applied by IIR authors (Budzik and Hammond 1999; Budzik and Hammond 2000; Shen et al. 2005a; Shen et al. 2005b; Sugiyama et al. 2004). These metrics rely on relevance assessments for the returned results. Whereas in classic evaluation the relevance assessments are based on the relevance of the document to the query, the authors of IIR systems incorporate human subjective assessments, either implicitly, analyzing interaction logs, or explicitly, asking the users to rate the results (relevance based metrics) or to provide a "best" order (ranking position based metrics).

One of the best known relevance based metrics in the IR community is Precision and Recall (PR) (Baeza-Yates and Ribeiro-Neto 1999). This metric takes into consideration the ratio of relevant documents retrieved (precision) and the proportion of relevant documents retrieved (recall), computing a vector of precision values at 11 fixed percentage points of recall (from 0 to 100% of relevant documents retrieved, in increments of 10%).

Precision at n (P@N), or cutoff points, is another metric quite common in IIR evaluation (Budzik and Hammond 2000; Kraft et al.
2006; Shen et al. 2005b). P@N is the ratio between the number of relevant documents in the first n retrieved documents and n. The P@N value is more focused on the quality of the top results, with lower consideration of the recall of the system. This is one of the main reasons why this value is normally used for content recommender systems. Another de facto standard metric is mean average precision (MAP). Mean average precision is the average of the 11 fixed precision values of the PR metric, and is normally used for comparing systems' performance. Shen et al. (2005a) use this evaluation metric for their system. Other authors use non-standard (or uncommon) metrics, like the number of relevant documents in the first 10 results (Finkelstein et al. 2001), or the average of the explicit assessments of the user, given as values between 1 and 5 (Vishnu 2005). In ranking position metrics, users inspect the returned result set and indicate what would have been the best subjective order. Finally, a distance measure (e.g., K-distance) is used to calculate the performance of the retrieval system.

Some authors have focused on new metrics for IIR evaluations. Borlund adapted classic metrics to add the concepts of relevance to the context and of multi-degree relevance assessments (Borlund 2003). Two of the metrics presented were the Relative Relevance (RR) and the Half-Life Relevance (HLR), which added multi-valued relevance assessments and were based on vectors of two relevance values, one concerning the algorithm itself (classic ranking) and the second considering the subjective relevance given by the user.
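The PR, P@N, and average precision metrics above can be made concrete with a small sketch. The code computes the 11-point interpolated precision vector, P@N, and the average of the 11 precision points; the ranked list and relevance judgments are toy data, not results from the thesis.

```python
def pr_at_11_points(ranked, relevant):
    """Interpolated precision at the 11 standard recall points (0%..100%)."""
    precisions, recalls = [], []
    hits = 0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
            recalls.append(hits / len(relevant))
    points = []
    for r in (i / 10 for i in range(11)):
        # interpolated precision: best precision at any recall level >= r
        candidates = [p for p, rec in zip(precisions, recalls) if rec >= r]
        points.append(max(candidates, default=0.0))
    return points

def p_at_n(ranked, relevant, n):
    """Ratio between the relevant documents in the first n results and n."""
    return sum(1 for doc in ranked[:n] if doc in relevant) / n

ranked = ["d1", "d2", "d3", "d4", "d5"]   # toy result list
relevant = {"d1", "d3", "d5"}             # toy relevance judgments

curve = pr_at_11_points(ranked, relevant)
avg_p = sum(curve) / len(curve)           # average of the 11 PR points
print(round(p_at_n(ranked, relevant, 3), 3))  # 0.667
print(round(curve[0], 3))                     # 1.0
```

Averaging `curve` and `p_at_n` values over a set of queries yields the whole-system figures used later in this chapter.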
Järvelin and Kekäläinen created two new evaluation metrics, CG (Cumulative Gain) and DCG (Discounted Cumulative Gain) (Järvelin and Kekäläinen 2000). These take into consideration the position of each relevant document, giving more importance to highly relevant documents (i.e., relevant to topic and user context) that appear in the first positions, as those are the most likely to be noticed by the user. In order to validate their implicit evaluation methods, Dou et al. (2007) used rank scoring and average rank metrics, which have their origin in recommender system evaluations. These metrics measure the quality of a result list presented by a recommendation system. Both metrics are computed using the clickthrough data and the position of the documents in the result set.

5.1.4 Evaluation Corpus

An evaluation corpus is extremely hard to prepare for an evaluation process. This is true in IR evaluation, and even more difficult to achieve in IIR evaluation, as the interaction model has to be incorporated, either fixed and distributed along with the corpus (Shen et al. 2005a) or as a topic description that can "guide" the user towards an interaction with the system (Borlund 2003). Most authors prefer either to use a free corpus (the Web, through search proxies, or local repositories such as desktop files), or to use or adapt an already existing corpus from the IR community (Shen et al. 2005a), like the corpora from the TREC conference.

The TExt Retrieval Conference (TREC) [17] is an annual international conference with the purpose of supporting large-scale evaluation within the Information Retrieval community. Each year, the conference consists of a set of tracks, each of which focuses on a specific area of information retrieval.
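Returning briefly to the gain-based metrics of section 5.1.3, CG and DCG can be sketched as below. The log2(rank + 1) discount used here is the common textbook formulation, not necessarily the exact discount function of Järvelin and Kekäläinen (2000), and the graded gains are toy values.

```python
import math

def cumulative_gain(gains):
    """CG at each rank: running sum of the graded relevance gains."""
    out, total = [], 0
    for g in gains:
        total += g
        out.append(total)
    return out

def dcg(gains):
    """Discounted cumulative gain: gains found lower in the ranking are
    divided by a logarithmic function of their rank."""
    return sum(g / math.log2(rank + 1)
               for rank, g in enumerate(gains, start=1))

# graded relevance of the top-4 results (3 = highly relevant .. 0 = not relevant)
gains = [3, 2, 0, 1]
print(cumulative_gain(gains))   # [3, 5, 5, 6]
print(round(dcg(gains), 3))     # 4.693
```

Because the discount shrinks the contribution of late-ranked documents, a highly relevant document at rank 1 counts far more than the same document at rank 10, which is exactly the behavior the metric is designed to reward.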
The normal procedure for participating in a TREC conference is selecting a track, obtaining the corpus assigned to the track, evaluating the list of topics (i.e., search queries) on one's retrieval engine, and sending the results to the task committee, where they will be evaluated. The relevance judgments are obtained with a pooling method: the first 100 results for every topic from every submitted search engine are evaluated, instead of evaluating the whole collection, which, given the size of most of the data sets (some going over 20M documents), is simply not feasible.

The TREC Interactive Track researches user interaction with retrieval engines. Topic execution is carried out by human users, and different interactions with the system are performed, like query reformulation or explicit feedback (depending on the protocol of each year). The interactions, however, are not stored or added to the evaluation framework. Thus, the results are not as useful, as they lack the interaction model data. There is also an interactive task for video retrieval in TRECVID, originally a track of TREC and now an independent conference. In the interactive task, the experimenter can refine queries and modify the ranked result list after the initial query. Interactive search still performs significantly better than the automatic or manual query tasks.

The HARD Track (Allan 2003) investigates the use of knowledge of the user's context in information retrieval systems. The topics are expressed similarly to other TREC tracks, except that additional metadata on the "context" of the user is given. This metadata includes the desired genre of the returned results, the purpose of the search, or a specification of the geographic focus of documents. The relevance judgments take into consideration the context described in the metadata, judging a document as non-relevant (to the topic), soft relevant (i.e., on topic but not relevant to the context), or hard relevant (both relevant to topic and context).

[17] http://trec.nist.gov/
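The pooling procedure used to build the TREC relevance judgments can be sketched as follows. The run contents and the small pool depth are invented for readability; TREC pools the top 100 results of each submitted run.

```python
def build_pool(runs, depth=100):
    """Pool the top-`depth` results of every submitted run for one topic.
    Assessors then judge only the pooled documents, never the whole collection."""
    pool = set()
    for ranked_results in runs:
        pool.update(ranked_results[:depth])
    return pool

# two toy runs, with a pool depth of 3 for readability
run_a = ["d1", "d2", "d3", "d4"]
run_b = ["d3", "d5", "d1", "d6"]
print(sorted(build_pool([run_a, run_b], depth=3)))  # ['d1', 'd2', 'd3', 'd5']
```

Documents outside the pool are assumed non-relevant, which is what makes judging multi-million document collections tractable.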
Shen et al. (2005a) adapted a TREC data set, taking advantage of the relevance assessments and topic definitions that already existed for the document database. As this corpus lacked an interaction model, the authors constructed one by monitoring three human subjects searching for the evaluation topics.

5.2 Our Experimental Setup

The contextualization techniques described in the previous sections have been implemented in an experimental prototype, and tested on a medium-scale corpus. A formal evaluation of the contextualization techniques may require a significant amount of extra feedback from users in order to measure how much better a retrieval system can perform with the proposed techniques than without them. For this purpose, it would be necessary to compare the performance of retrieval a) without personalization, b) with simple personalization, and c) with contextual personalization. This requires building a testbed consisting of: a document corpus, a set of task descriptions, a set of user profiles, the relevance judgments for each task description, and the interaction model, either fixed or provided by the users.

The document corpus consists of 145,316 documents (445 MB) from the CNN website [18], plus the KIM domain ontology and knowledge base publicly available as part of the KIM Platform, developed by Ontotext Lab [19], with minor extensions. The KB contains a total of 281 RDF classes, 138 properties, 35,689 instances, and 465,848 relations. The CNN documents are annotated with KB concepts, amounting to over three million annotation links. The relation weights R and R^-1 were first set manually on an intuitive basis, and tuned empirically afterwards by running a few trials.

Task descriptions are similar to the simulated situations explained in section 5.1.1; the goal of the task description is to provide sufficient query and contextual information for the user to perform the relevance assessments, and to stimulate the interaction with the evaluation system.

[18] http://dmoz.org/News/Online_Archives/CNN.com
Similar to the TREC HARD track (see 5.1.4), we will use real users who will provide explicit relevance assessments with respect to a) query relevance, b) query relevance and general user preference (i.e., regardless of the task at hand), and c) query relevance and specific user preference (constrained to the context of her task).

The selected metrics are precision and recall values for single query evaluation, and average precision and recall, mean average precision, and average P@N values for whole-system performance evaluation. Average precision and recall is the average value of the PR points over a set of n queries. PR values were chosen as they allow a finer analysis of the results, since the values can be represented in a graph and different levels of performance can be compared at once. For instance, a retrieval engine can have good precision, showing relevant results in the top five documents, whereas the same search system can lack good recall performance, being unable to find a good proportion of the relevant documents in the search corpus. See Figure 5.2 for a visual description of the precision and recall analysis areas. The goal area to reach is the upper right part, as it denotes that the search engine has good precision over all the results and has achieved a good recall value.

Since the contextualization techniques are applied in the course of a session, one way to evaluate them is to define a sequence of steps where the techniques are put to work. This is the approach followed in the first set of experiments, for which the task descriptions consist of a fixed set of hypothetical context situations, detailed step by step. The second set of experiments, which is user centered, is focused on the overall performance of the search engine and on the evaluation of the real interactions of users.
In this case, the interaction model is provided by the users, following the task descriptions. On the one hand, the first set of experiments will give more detailed and fine-grained results, allowing an exhaustive analysis of the system's proposal and, optionally, the possibility of refining or tuning the parameters used in our search system. On the other hand, the second set of experiments will give an idea of the real performance of the systems when used by real users.

Figure 5.2. Different areas of performance for a precision and recall curve (precision area, recall area, and optimal results).

5.3 Scenario Based Testing: a Data Driven Evaluation

Although subjective, these experiments allow meaningful observations, and testing of the feasibility, soundness, and technical validity of the defined models and algorithms. Furthermore, this approach allows a better and fine-grained comprehension of the proposed system's behavior, which is critical, due to the quantity of different parameters and factors that affect the final personalization and contextualization process.

5.3.1 Scenario Based Evaluation System

The retrieval system used for this experiment is a semantic search engine co-developed by the author (Castells et al. 2007), which did not itself implement any personalization capability. The retrieval system was adapted so that it included: 1) a user profile editor, 2) personalization capabilities, 3) the context monitor and acquisition engine, 4) the semantic expansion module, and 5) the contextual personalization algorithm.

[19] http://www.ontotext.com/kim
The user is able to input a keyword search and what we call a semantic or ontology-based search, using the KB relations (Castells et al. 2007). The context monitor engine is based on simple implicit feedback techniques, extracting concepts from input queries (either keyword or semantic) and opened documents. The extracted concepts are applied within a timeline of actions, as explained in section 4.4, where a user input query is the unit of time. Finally, the semantic expansion and user profile contextualization algorithms are applied, using the contextualized preferences as input to the personalization algorithm (see section 4.7). The rest of this section describes the User Interface (UI) of the system. Figure 5.3 shows the main window of the evaluation and retrieval system, whose components include the document viewer, result summaries, keyword query input, semantic query input, personalization slider, and semantic query dialog.

Figure 5.3. Main window of the scenario based evaluation system.

The main input UI components of the system's interface are explained below:

• Keyword query input
The user can use simple keyword queries in order to search the document corpus. Keyword search is done using Lucene, a Java-based text search engine. Lucene was set up with the default parameters, using a common English list of stop words, with no stemming functionality. In order to use the context acquisition techniques, the keywords are matched to ontological concepts using a simple match between concept label and keyword. This matching was slightly "fuzzified", allowing a Hamming distance of up to two.

• Semantic query input
The users were able to input an ontology-based search, using the KB relations (Castells et al. 2007).
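The fuzzified keyword-to-concept matching described above for the keyword query input can be sketched as below. Note that the Hamming distance is only defined for strings of equal length, so under this scheme a keyword can only match labels of the same length; the concept labels here are invented examples.

```python
def hamming(a: str, b: str):
    """Hamming distance: number of differing positions; only defined for
    strings of equal length (returns None otherwise)."""
    if len(a) != len(b):
        return None
    return sum(x != y for x, y in zip(a, b))

def match_concepts(keyword, labels, max_distance=2):
    """Map a query keyword to ontology concepts whose label is within the
    given Hamming distance (case-insensitive)."""
    matches = []
    for label in labels:
        d = hamming(keyword.lower(), label.lower())
        if d is not None and d <= max_distance:
            matches.append(label)
    return matches

labels = ["Microsoft", "Motorola", "Japan"]   # toy concept labels
print(match_concepts("mikrosoft", labels))    # ['Microsoft']
```

The distance threshold of two tolerates small misspellings while keeping spurious matches between unrelated labels unlikely.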
The users could directly input a formal ontological query, in a language such as RDQL [20] (with syntax similar to SQL, a typical database query language). To facilitate semantic query construction, users can use a simple dialog that allows the interactive edition of RDQL queries, although not the whole query spectrum can be created with this dialog. The dialog allows the user to select which type of concept she is looking for, to add restrictions on the properties of this concept, and to add restrictions on relations of this concept with others from the KB. All the semantic topics used in the evaluated scenarios could be created with this editor. Optionally, the user could launch both a keyword and a semantic query, combining the results in a single list.

Figure 5.5 shows a snapshot of the query dialog UI for the generation of semantic queries. The figure shows the dialog that would generate the semantic query "Companies located in Central Europe that trade on the New York Stock Exchange". Figure 5.4 is a snapshot of the dialog for the creation of complex relations, or restrictions. For instance, the property "located in Central Europe" is a complex relation, as it has to be stated as "located in a country which is a region of Central Europe". Figure 5.4 shows the state of the dialog when this restriction is input.

[20] http://www.w3.org/Submission/2004/SUBM-RDQL-20040109/

Figure 5.4. UI for complex relations creation.

Figure 5.5. Interactive dialog for semantic query generation.

• Personalization slider
Users can adjust the level of personalization (or contextualized personalization) that they desire for the current result set. The personalization is dynamically adjusted, showing a live reordering of results as the user selects a greater or lesser degree of personalization.

• Profile editor
The profile editor allows manually selecting single concepts as user preferences, exploiting the semantic description of the KB.
Users are also able to create what we call complex preferences, i.e., preferences based on restrictions, like "I like airline companies" or "I don't like tech companies that trade on the New York Stock Exchange", thus avoiding having to search for and manually select those concepts as preferences. Figure 5.6 shows the main components of the profile editor UI: the ontology browser, concept preferences, and complex preferences.

Figure 5.6. Concept profile editor UI.

The main output UI components are:

• Result summaries
Query results are shown as a list of document snippets, showing the text most relevant to the query for each document. A gradient color bar at the near right of each snippet indicates the personalization or personalization-in-context score of each document. Concepts that have matched user preferences are highlighted using the following color code: a) concepts that matched positive interests of the user are highlighted in bold green, b) concepts that matched a negative interest are highlighted in bold red. User interest concepts that result from the semantic expansion algorithm are highlighted with a lighter color, i.e., light green or light red, depending on whether the concepts have a final inferred positive or negative value, respectively. Figure 5.7 shows several examples of highlighted document snippets.

• Document viewer
Whenever the user clicks a document snippet, the results appear in the large right text panel of the main UI (see Figure 5.3). The document text has been cleaned of HTML formatting and URL links. Text is also highlighted using the same color scheme as the document snippets. Additionally, users can open all the annotated concepts, and their relation weights, that the document contains.

Figure 5.7. Examples of document snippets and text highlighting.

• User profile, context and semantic expansion information panel
With this information panel, users can explore the whole semantic expansion and preference contextualization processes. Users can check their original preferences, the context concepts, and the results of the semantic expansion approach applied to preferences and context. Finally, users can check the resultant contextualized preferences. Figure 5.8 shows an example of this information panel.

Figure 5.8. Contextual and personalization information dialog.

5.3.2 Scenario Based Evaluation Methodology

Our experimental methodology can be considered an extension of Borlund's simulated situation evaluation (see section 5.1.1) with further contextual and user preference indications. We fixed the user interest profiles for the whole experiment, consisting of a set of weighted concepts, and complex preferences, manually assigned by the evaluators. Some of these preferences were chosen because of their relation to the task descriptions. The task descriptions consisted of a) a short topic, indicating when a document is relevant to the last input query, b) a contextual situation description, indicating when a document adjusts to the actual context of the user, c) a description indicating when a document is relevant to the user preferences, and d) a step by step interaction model. The latter means considering sequences of user actions defined a priori, which makes it more difficult to get realistic user assessments, since in principle the user would need to consider a large set of artificial, complex and demanding assumptions. Figure 5.9 is a task description example, specifically for the use case described in section 4.10.

The rationale of this task description is a user who has a set of general preferences c).
The user interacts with the retrieval system, following the interaction model indicated by d); at some point, the user expresses her next information need by means of topic a). This interaction model, including queries and visited documents, creates a contextual situation in which documents returned for topic a) are relevant following the indications of b). The relevance judgment for each result in the final result set, produced by topic a), is that a document is relevant if it is both relevant to the user preferences c) and to the user's current context b).

The final search result of the last query interaction, the topic query, is presented to the evaluator (a role that we played ourselves) to provide the explicit relevance assessments of the whole result set. The document corpus, the task descriptions, the fixed interaction model and the relevance assessments finally formed a full test collection for the proposal. This collection was stored in such a way as to facilitate reproduction of the evaluation and reuse of the test collection. Appendix A gives more details on the final set of ten task descriptions.

5.3.3 Scenario Based Experimental Results

A final set of ten task descriptions was created as part of the evaluation setup. Figure 5.10 a) shows the PR curve for the use case scenario described in section 4.10. This is a clear example where personalization alone would not give better results, or would even perform worse than non-adaptive retrieval (see the drop of precision for recall between 0.1 and 0.4 in Figure 5.10 a)).

Figure 5.9. Example of user centered task description:

Task T: Japanese Companies: Food sector companies
a) Topic
Japan based companies
b) Relevancy to context
Relevant documents are those that mention a company that has or has had an operation based in Japan. The document has to mention the company; it is not mandatory that the article mentions the fact that the company is based in Japan.
c) Relevancy to preferences
Consider that the article adjusts to your preferences when one of the mentioned companies has a positive value in your user profile.
d) Interaction model
1. Query input [keyword]: Tokio Kyoto Japan
2. Opened document: n=1, docId=345789
3. Opened document: n=3, docId=145623

The drop in precision occurs because irrelevant long-term preferences (such as, in the example, technological companies which are not related to the current user focus on Japan-based companies) would get in the way of the user. The experiment shows how our contextualization approach can avoid this effect and significantly enhance personalization by removing such out-of-context user interests and leaving the ones that are indeed relevant to the ongoing course of action.

Figure 5.10. Comparative performance of personalized search with and without contextualization: a) PR curve for the use case scenario, b) average PR over the ten use cases; curves shown are Contextual Personalization, Simple Personalization, and Personalization Off.

It can also be observed that the contextualization technique consistently results in better performance with respect to simple personalization, as can be seen in the average precision and recall depicted in Figure 5.10 b), which shows the average PR results over the ten use cases. Figure 5.11 depicts the MAP histogram comparing the contextualized vs. non-contextualized personalization for the ten use cases. The Context bars compare personalized retrieval in context vs. retrieval without personalization (i.e., the baseline system), and the Personalization bars compare simple personalized retrieval vs. the baseline. We can observe that the contextualization approach consistently outperforms the personalization approach. Specifically, in use case 9 the personalization approach performs even worse than the baseline approach. This is because a set of preferences boosts results that are not relevant to the current task at hand.
The contextualization approach is able to filter out some of these irrelevant preferences, resulting in an improvement of the performance over both the personalization and the baseline approaches. Appendix A provides more details on the individual performance of each evaluated scenario.

Figure 5.11. Comparative mean average precision histogram of personalized search with and without contextualization, over the ten use cases.

5.4 User Centered Evaluation

In this second approach, real human subjects are given three different retrieval task descriptions, similar to the simulated situations described by Borlund (Borlund 2003), each expressing a specific information need, so that users are given the goal of finding as many relevant documents as possible.

5.4.1 User Centered Evaluation System

For the evaluation of this system we used the same corpus and semantic search engine used in the scenario based evaluation (see section 5.3.1). Figure 5.12 shows the evaluation system UI. Compared to the previous evaluation system UI (see Figure 5.3), we simplified the retrieval UI in order to facilitate the interaction with real users. We did not modify the output document snippets or the document text highlighting. The main simplifications affect the user's input:

• Query input
We limited the query input to the keyword input of the scenario based evaluation. The search engine was again Lucene, with the same parameters as in the previous methodology. In order to feed the context acquisition technique, the keyword to ontology concept mapping was also used.

• Profile editor
The profile editor was simplified in order to show a set of predefined concepts. Users were only allowed to inspect and rate this predefined set of concepts. Preference values go from -5 to 5, with -5 indicating that the user completely dislikes the concept, 5 indicating that the user completely likes it, and 0 meaning that the user is indifferent to, or ignores, the concept. Figure 5.13 shows a subset of the concepts that the user was able to edit during the evaluation.

Figure 5.12. User Centered evaluation system UI.

Figure 5.13. User preference edition.

5.4.2 User Centered Evaluation Methodology

Similarly to the scenario based evaluation (see section 5.3.2), we extend Borlund's simulated situation task descriptions. Each task description expresses a specific information need, so that users are given the goal of finding as many documents as possible relevant to the task description. Differently from the scenario based task descriptions, here the sequence of actions is not fixed, but is defined with full freedom by the users as they seek to achieve the proposed tasks. Therefore, there is no need to include an interaction model in the task description. The task description is composed of 1) a paragraph indicating when a document is relevant to the retrieval task, 2) an indication of how to consider a document relevant to the user preferences, and 3) an example of a document relevant to the task.

A total of 18 subjects were selected for the experiment, all of them PhD students from the authors' institutions. Three tasks were set up for the experiment, which can be briefly summarized as:

1. News about agreements between companies; see Figure 5.14 for an example complete description of this task.
2. Presentations of new electronic products.
3. Information about cities hosting a motor sports event.

Each task was tested a) with contextual personalisation, b) with simple personalisation, and c) without personalisation. The task descriptions were assigned using a Latin square distribution, so that users would not repeat the same task twice or more. Each of the three modes was used with six users per task (3 modes × 6 users = 18 tests for each task), in such a way that each user tried each of the three modes a, b, and c exactly once. This way, each mode was tried exactly 18 times: once by each user, and 6 times for each task, so that no mode is harmed or favoured by different task difficulty or user skills.

Figure 5.14. Task description for task 1: News about agreements between companies:

Task 1: Agreements between companies
1) Relevancy to task
Relevant documents are those that state an agreement between two companies; the article must name the two companies explicitly. For instance, articles about a collaboration or an investment agreement between two companies are considered relevant. Agreements where one company buys another company, totally or partially, are NOT considered relevant.
2) Relevancy to preferences
Consider that the article adjusts to your preferences when one of the mentioned companies has a positive value in your user profile.
3) Example of relevant document to the task (excerpt)
CNN.com - Microsoft, AOL settle lawsuit - May. 30, 2003
Microsoft, AOL settle lawsuit
The two companies also said they have reached a wide-ranging cooperative agreement, under which they will explore ways to allow people using their competing instant message systems to communicate with each other. Microsoft has also agreed to provide America Online software to some computer manufacturers.

Users never knew which of the three modes they were using for each task. Modes were labelled with anonymous labels ('A', 'B' and 'C', assigned to the personalized, contextualized and baseline systems, respectively) and given in a different order (see the task controllers in Figure 5.12).
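The Latin square assignment of modes to (user, task) pairs can be sketched as a cyclic rotation; the user indices and mode labels below are placeholders for the actual subjects and systems.

```python
def latin_square_assignment(n_users, tasks, modes):
    """Assign a mode to every (user, task) pair via a cyclic Latin square:
    each user tries every mode exactly once, and each (mode, task) pair
    occurs equally often across users."""
    assert len(tasks) == len(modes)
    n = len(modes)
    return {(u, task): modes[(u + t) % n]
            for u in range(n_users)
            for t, task in enumerate(tasks)}

tasks = ["task1", "task2", "task3"]
modes = ["A", "B", "C"]   # anonymised personalized / contextualized / baseline
plan = latin_square_assignment(18, tasks, modes)

# each of the 18 users sees each mode exactly once
print(sorted(plan[(0, t)] for t in tasks))   # ['A', 'B', 'C']
# each mode is used 6 times per task (3 modes x 6 users = 18 tests per task)
print(sum(1 for (u, t), m in plan.items() if t == "task1" and m == "B"))  # 6
```

With 18 users and 3 modes, the 6 users in each rotation group balance the design exactly, which is what prevents any mode from being favoured by task difficulty or user skill.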
User preferences are obtained manually from the users by asking them to explicitly rate a predefined list of domain concepts at the beginning of the session, using the simplified version of the profile editor. Figure 5.13 shows a subset of the concepts that the users had to explicitly rate.

The relevant documents for each task, i.e., the relevancy for the topic, were marked beforehand by an expert [21] (a role that we played ourselves), so that users are relieved from providing extensive relevance judgements. However, users are encouraged to check the document snippets and to open the documents that seem more relevant according to their subjective interests, in order to provide the system with more contextual tips, and to provide the users with more task information. Context information is gathered based on the concepts annotating such selected results, and the concepts related to the keywords in user queries (using the keyword-concept mapping provided in the KIM KB).

A typical task execution was as follows:
1. The user reads the task description.
2. The user executes a keyword query.
3. The percentage of found documents over the whole set of relevant documents is shown to the user.
4. The user reviews the top result set summaries and examines those which seem to apply to the task and her preferences.
5. If the user has entered at least three queries and thinks that she has achieved a good percentage of documents relevant to the task, she can push the stop button to finish the task. If not, she returns to step 2.
At the end of every task, the system asks the user to mark the documents in the final result set as related or unrelated to her particular interests and to the search context the user followed. The users could choose between 4 degrees of relevancy: not relevant, somewhat relevant, relevant, and highly relevant. As other studies point out (Allan 2003; Borlund 2003; Järvelin and Kekäläinen 2000), there is a need for multi-graded relevance judgements in interactive and adaptive system evaluations, where there are often multiple criteria for deciding the relevancy of a document. In our experiments a document can be relevant to the task (which is decided beforehand by ourselves), relevant to the preferences of the user (chosen by the users), and relevant to the context (indicated by the interaction of the user during each task, and also evaluated by the users). Users were thus encouraged to give the highly relevant assessment to those documents which were at the same time relevant to the preferences of the user, to the topic description, and to the different interactions that they were performing. Figure 5.15 shows the UI for the users' relevance assessment input.

[21] The relevance judgment corpus was created by a polling technique, where the first 100 documents of a number of queries were evaluated by ourselves and marked as relevant or not to the task description.

Figure 5.15. Relevance assessment UI.

The relevance assessments, together with the monitored interaction logs, are stored in order to calculate the evaluation metrics or to automatically recreate the interaction model of the user, allowing some level of reproduction and detailed study of the experiments.

Regarding the metric evaluation, two simplifications were made for each interactive sequence (i.e., for each task and user):

• The search space is simplified to be the set of all documents that have been returned by the system at some point in the iterative retrieval process for the task conducted by this user.
• The set of relevant documents is taken to be the intersection of the documents in the search space marked as relevant for the task by the expert judgement, and the ones marked by the user according to her particular interests.

5.4.3 User Centered Experimental Results

This evaluation step was intended to give a general view of the performance of each approach when involved with real users. On this basis, we selected the average PR and average P@N metrics in order to compare each evaluated technique. For the metric computation, we consider relevant those documents that were marked by the users as relevant or highly relevant. The average values were obtained by calculating the PR and P@N values for every query interaction of the user. For instance, if during the task execution the user entered five different queries into the system, each of the five result sets was collected and PR and P@N values were calculated, based on the final relevance assessments given by the user at the end of the task execution. These PR and P@N points were then averaged across all users and all tasks, grouped by each of the three evaluated approaches.

Figure 5.16 shows the results obtained with this setup and methodology. The graphics show a) the precision vs. recall curve, and b) the P@N cutoff points. The precision vs. recall curve on the left of this figure shows a clear improvement at high precision levels by the contextualisation technique, both with respect to simple personalisation and no personalisation. The P@N curve clearly shows a significant performance improvement by the contextual personalisation, especially in the top 10 results. Personalisation alone achieves considerably lower precision on the top documents, showing that the contextualisation technique avoids further false positives which may still occur when user preferences are considered out of context.
This validates our hypothesis that the contextualization approach is able to improve the precision of a personalization system. The improvement of the contextualization approach decreases at higher recall levels, corresponding to those preferences that were related to the task but that the contextual algorithm was unable to match with the implicit task description, either by lack of implicit information or by lack of the necessary KB relations to expand the context to the concept preferences.

Figure 5.16. Comparative performance of personalized search with and without contextualization.

Table 5.2 shows the mean average precision values for contextual, simple, and no personalisation in this experiment, which reflects that our technique globally performs clearly above the two baselines.

Retrieval model              MAP
Contextual personalization   0.1353
Simple personalization       0.1061
Personalization off          0.0463

Table 5.2. Results on Mean Average Precision (MAP) for each of the three evaluated retrieval models.

Most cases where our technique performed worse were due to a lack of information in the KB, as a result of which the system did not find that certain user preferences were indeed related to the context. This resulted in a decrease of the recall performance. Arguably, solving this lack of information in the KB, or improving the semantic intersection of user interests and user context, would result in a recall performance comparable to that of the personalization system (note that improving recall is not possible, as the contextualization technique does not add further information, it just filters preferences) and an even higher precision of the contextualized system over the personalization approach.
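The metrics used in this evaluation can be sketched as follows. This is a minimal illustration: the helper names, the binary mapping of the four-level grading scale, and the toy documents are our assumptions; only the rules themselves (grades "relevant" and "highly relevant" count as relevant, and the relevant set is the intersection of the expert task judgements with the user's own assessments) come from the setup described above.

```python
# Sketch of P@N and average precision over a single query interaction.
# Helper names and toy data are illustrative assumptions.

def precision_at_n(ranked, relevant, n):
    """Fraction of the top-n ranked documents that are relevant."""
    return sum(1 for doc in ranked[:n] if doc in relevant) / n

def average_precision(ranked, relevant):
    """Mean of the precision values at each relevant document's rank."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

user_grades = {"d1": "highly relevant", "d2": "not relevant", "d3": "relevant"}
user_relevant = {d for d, g in user_grades.items()
                 if g in ("relevant", "highly relevant")}
expert_relevant = {"d1", "d3", "d4"}
relevant = user_relevant & expert_relevant   # intersection, as in the setup

ranked = ["d1", "d2", "d3"]
p_at_2 = precision_at_n(ranked, relevant, 2)
ap = average_precision(ranked, relevant)
```

MAP, as reported in Table 5.2, would then be the mean of such average-precision values across all query interactions, users and tasks.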
[Figure 5.16: a) precision at cut-off points 1-1000 and b) precision vs. recall, each comparing Contextual Personalization, Simple Personalization and Personalization Off.]

Chapter 6

6 Conclusions and Future Work

This thesis introduces a novel technique for personalized retrieval, where short-term context is taken into consideration, not only as another source of preference, but as a complement to the user profile, in order to aid the selection of those preferences that will produce reliable and "in context" results.

6.1 Summary and Achieved Contributions

6.1.1 Personalization Framework Based on Semantic Knowledge

This thesis proposes a personalization framework which exploits an ontology-based representation of user interests. Two of the three main parts of any personalization framework have been addressed here: user profile exploitation and user profile representation.

User profile learning alone constitutes a wide and complex area of investigation (Ardissono and Goy 2000; Gauch et al. 2007; Wallace and Stamou 2002), and is not addressed per se in the scope of this work. The available achievements in that area are thus complementary and can be combined with the techniques proposed herein. See e.g. (Cantador et al. 2008) for an extension of the research presented here where this has in fact been carried out.

The user profile representation is based on ontological concepts, which are richer and more precise than classic keyword or taxonomy based approaches. Our personalization model has the main advantage of exploiting any type of relations between concepts, beyond just term or topic based user profile representations, the latter being used in typical classification based personalization systems.
The user profile exploitation lies on top of a semantic index, providing a semantic user-content similarity score, the Personal Relevance Measure, which represents the level of similarity between a semantic user profile and the semantic document (meta-)data. This allows the application of our techniques to any multimedia corpora containing annotations linking raw content to the ontology-based conceptual space where user preferences and semantic context are modeled. The main benefits of our approach, which are novel to the current state of the art, are summarized as follows:

- True exploitation of concept-based user profiles: using formal ontology grounding in order to provide unambiguous matching between user preferences and content, user preference inference, and complex user preference representation.
- A technique for semantic user profile exploitation: based on content stored in a semantic index and on a concept space vector representation of user interests and content.

6.1.2 Personalization in Context

Context is an increasingly common notion in IR. This is not surprising, since it has long been acknowledged that the whole notion of relevance, at the core of IR, is strongly dependent on context; in fact, it can hardly make sense out of it. Several authors in the IR field have explored approaches similar to ours in the sense that they find indirect evidence of searcher interests by extracting implicit information from objects manipulated by users in their retrieval tasks (Shen et al. 2005a; Sugiyama et al. 2004; White et al. 2005a).

A first distinctive aspect of our approach is the use of semantic concepts, rather than plain terms, for the representation of these contextual meanings, and the exploitation of explicit ontology-based information attached to the concepts, available in a knowledge base.
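As a rough sketch, the Personal Relevance Measure mentioned above can be illustrated as a cosine similarity between a weighted concept profile and weighted document annotations. This is an assumption about the general shape of such a score, not the exact formulation in the thesis, and the concept names are invented.

```python
# Hedged sketch of a user-content similarity score in concept space:
# both the profile and the document annotations are dicts mapping
# concepts to weights. The cosine formulation is an assumption.
import math

def personal_relevance(profile, annotations):
    shared = set(profile) & set(annotations)
    dot = sum(profile[c] * annotations[c] for c in shared)
    norm_p = math.sqrt(sum(w * w for w in profile.values()))
    norm_a = math.sqrt(sum(w * w for w in annotations.values()))
    return dot / (norm_p * norm_a) if norm_p and norm_a else 0.0

score = personal_relevance({"FormulaOne": 0.8, "Tennis": 0.2},
                           {"FormulaOne": 1.0})
```

A document annotated only with concepts absent from the profile scores 0, while a document annotated with the user's strongest interests scores close to 1.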
This extra, formal information allows the determination of concepts that can be properly attributed to the context in a more accurate and reliable way (by analyzing explicit semantic relations) than the statistical techniques used in previous proposals, which e.g. estimate term similarities by their statistical co-occurrence in a content corpus.

We have here proposed an approach for the automatic acquisition and exploitation of a live user context, by means of the implicit supervision of user actions in a search session. Our approach exploits implicit feedback information from the user's interaction with the retrieval system, being able to construct a semantic representation of the user's context without further interaction from the user. Our proposal is based on annotated content, so it can be applied to any type of multimedia system that identifies documents with a set of concepts.

A novel method for the combination of long-term and short-term preferences was introduced. This proposal is based on the semantic combination of user preferences and the semantic runtime context. The semantic combination is performed by adapting a classic constrained spreading activation algorithm to the semantic relations of the KB. We have presented how concept relations can be exploited in order to achieve a meaningful connection between user context and user personalization. Experimental results are promising and show that this technique could make personalized retrieval systems take a leap forward, by placing user interests in context. The novelty brought by our approach can thus be summarized in the following points:

- Formal semantic representation of the user context: enabling 1) a richer representation of context, 2) semantic inference over the user context representation, and 3) an enhanced comprehension of the user context through the exploitation of KBs.
- A technique for semantic context acquisition: to the best of our knowledge, this is the first proposal of semantic context acquisition and construction based on implicit feedback techniques.
Our proposal also introduces a novel adaptation of an implicit ostensive model, in order to exploit content within a semantic index.
- A novel semantic expansion technique: based on an adaptation of the Constrained Spreading Activation (CSA) technique to semantic KBs.
- A novel concept of "personalization in context": by means of a filtering process of user preferences, discarding those not related to the current live user context. This results in a novel personalization and contextualization modeling approach that clearly differentiates personalization and contextualization techniques, separating the acquisition and exploitation processes of user preferences and context. The benefit is twofold: the personalization techniques gain accuracy and reliability by avoiding the risk of having locally irrelevant user preferences get in the way of a specific and focused user retrieval activity. Inversely, the pieces of meaning extracted from the context are filtered, directed, enriched, and made more coherent and meaningful by relating them to user preferences.

6.1.3 User and Context Awareness Evaluation

Our proposal was implemented over a semantic search engine co-developed by the author (Castells et al. 2007), using a document corpus of over 150K documents and a Knowledge Base with over 35K concepts and 450K relations.

The evaluation of context-aware and user adaptive systems is a difficult task (Yang and Padmanabhan 2005). In this thesis, we have adopted a two-step evaluation approach. Firstly, we conducted a scenario based evaluation, based on simulated situations.
This allowed us to have a better comprehension of our approach's behavior, together with finer grained performance analysis. Secondly, we performed an evaluation with real human subjects. The scope of this second evaluation was to test the feasibility of our system with real users interacting with the retrieval system. The results of both evaluations were encouraging; we believe that both evaluation methodologies gave very relevant results for the specific goals they were designed for, and that they can provide a ground methodology with which similar systems can be evaluated on the impact of context on personalization.

This evaluation methodology can be applied to analyze the performance of both personalized and context-aware retrieval systems, but with different purposes. On the one hand, personalized systems can be evaluated with our methodology in order to analyze the behavior of the personalization approach when different situations (contexts) are presented to the user, i.e. our evaluation approach can test how "precise" the personalization approach is. On the other hand, our evaluation approach can test whether a context-aware system does not lose the overall perspective of the user's trends, i.e. how the context-aware system adjusts to the long-term interests of the user. In the case of systems that try to cover both personalization and contextualization (Ahn et al. 2007; Billsus and Pazzani 2000; Sugiyama et al. 2004; Widyantoro et al. 1997), our methodology can provide what we believe to be a complete and true evaluation of the combined application of long and short-term user interests in a retrieval system. The proposed methodology introduces the following novelties to the personalization and context-aware research area:

- An evaluation methodology that analyzes the impact of context over personalization: to the best of our knowledge, this is the first time an evaluation and analysis of the combination of a personalization and a contextualization approach has been carried out.
- Novel methodologies for adaptive and interactive user evaluations: we have introduced a two-step methodology that extends the simulated situations defined by Borlund (2003) in order to include a set of user preferences (either simulated or assigned by a user) and a hypothetical contextual situation. In the scenario based evaluation we have also extended this contextual situation with a simulated interaction model of the user, in order to provide this information to implicit feedback based techniques, which are widely adopted within context-aware retrieval systems.

6.2 Discussion and Future Work

6.2.1 Context

Context, as presented in this thesis, is seen as the themes and concepts related to the user's session and current interest focus. In order to get these related concepts, the system first has to monitor the interactions of the user with the system. However, when the user starts a search session, the system does not yet have any contextual clue whatsoever. This problem is known as cold start. Our initial solution is not to filter the user's preferences in the first interaction with the system, but 1) this produces a decay in the performance of the system for this interaction (as seen in the experiments), and 2) this assumes that the only useful source of context is the interaction with the system itself, which is easily refutable. Take, for instance, other interactions of the user with other applications (Dumais et al. 2003), or other sources of implicit contextual information, like the physical location of the user (Melucci 2005).

Another limitation of the contextualization approach is that it assumes that consecutive user queries tend to be related, which does not hold when sudden changes of user focus occur. A desirable feature would be to automatically detect a user's change of focus (Huang et al. 2004; Sriram et al. 2004), and to be able to select which concepts (if not all) of the current context representation are no longer desirable for the subsequent preference filtering.
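One possible realization of such focus-change detection, offered purely as our own illustration and not as part of the thesis implementation, is to reset the context whenever the concept overlap between the running context and a new query falls below a threshold:

```python
# Illustrative sketch: detect a change of focus by comparing the concepts
# of the new query with the current context. The Jaccard measure and the
# threshold value are assumptions.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def refresh_context(context_concepts, new_query_concepts, threshold=0.2):
    if jaccard(context_concepts, new_query_concepts) < threshold:
        return set(new_query_concepts)   # focus change: restart the context
    return context_concepts | set(new_query_concepts)

ctx = refresh_context({"FormulaOne", "Ferrari"}, {"FormulaOne", "Monza"})
# the overlap is high enough, so the context is extended rather than reset
```

A more selective variant could drop only the context concepts unrelated to the new query, rather than resetting the whole representation.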
This work presents an adaptation of the ostensive implicit feedback model (Campbell and van Rijsbergen 1996) for the semantic runtime context construction. There are more models that could be further explored: White et al. (2005b) made an evaluation study of different implicit feedback techniques, including techniques based on the term ranking method wpq (Robertson 1990), on which the ostensive model is based. All the studied approaches can be adapted to our approach. Our selection was motivated more by the work of Campbell and van Rijsbergen (1996) than by the results of White et al.'s evaluation. From the results of this evaluation, there are other approaches that seem more effective at extracting the user's context, although they seem to depend largely on the type of interactive system the user is using (e.g. video, image or textual retrieval engine). One example is Jeffrey's conditioning model, which gives a higher weight to concepts appearing first in search interactions, as the model reasons that the user can be more certain of their relevancy once she has further interacted with the system, and that further interaction can also be motivated by curiosity rather than by task relevancy. It would be interesting to extend our user studies in order to evaluate possible adaptations of these different implicit approaches, especially those such as Jeffrey's model, which has a completely different context acquisition conceptualization.

As stated previously, this thesis did not focus on user profile learning approaches, as we felt that these are a very complex research area on their own, and we were more interested in the application of user context to personalization. Anyhow, several approaches begin with context acquisition and later apply profile learning over this short-term profile. These learning approaches normally define learning and forgetting functions over the acquired short-term profile (Katifori et al. 2008; Widmer and Kubat 1996). For instance, Katifori et al.
(2008) define three learning levels: short interests, mezzanine interests and long-term interests, associating each level with different forgetting functions and a threshold value that makes concepts jump to a higher level (short → mezzanine → long). Similarly, our context representation can be exploited in order to create a semantic long-term user profile. In order to test our system with profile learning functionality, our evaluation methodology would have to be extended to either contain past usage history information, or hypothetical past contextual representations.

Our contextualization approach depends largely on our technique for semantic contextual and preference expansion, based on an adaptation of CSA techniques. We have presented algorithms that include some improvements and parameterization options over the semantic expansion technique, which can largely help with the scalability of our approach. We have applied this expansion with different KBs, the one used in the reported experiments (35K concepts and 450K relations) being the biggest of them, with expansion times ranging from 0.3 to 1 seconds, depending on the user profile. Our experience tells us that the parameters that have most impact on the expansion process are the reduction factor and the number of allowed expansion steps (see section 4.5.1). Nowadays, larger KBs are available which would add valuable knowledge to our experimental system. For instance, DBpedia 22 contains 2.18M concepts and 218M relations extracted from Wikipedia 23. It would be interesting to see how our expansion process handles such an amount of information, the impact that such an amount of knowledge can have on the overall system's performance, and the tradeoff between this performance and a reasonable inference time.
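A minimal sketch of a constrained spreading activation expansion, showing the two most impactful parameters noted above (the reduction factor, here `decay`, and the number of allowed expansion steps). The relation weights, the pruning threshold and the toy KB are assumptions for illustration, not the thesis algorithm itself.

```python
# Sketch of constrained spreading activation over a concept KB.
# seeds: dict concept -> initial activation.
# relations: dict concept -> list of (neighbour, relation_weight).

def spread_activation(seeds, relations, decay=0.5, max_steps=2, threshold=0.05):
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for concept, value in frontier.items():
            for neighbour, weight in relations.get(concept, []):
                out = value * weight * decay
                if out < threshold:          # constraint: prune weak paths
                    continue
                activation[neighbour] = activation.get(neighbour, 0.0) + out
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + out
        frontier = next_frontier
    return activation

kb = {"FormulaOne": [("Ferrari", 1.0)], "Ferrari": [("Maranello", 1.0)]}
expanded = spread_activation({"FormulaOne": 1.0}, kb)
```

Raising `decay` or `max_steps` lets activation reach more distant concepts at the cost of expansion time, which matches the scalability tradeoff discussed above.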
This thesis presents a novel evaluation approach for context-aware and personalization systems, in which the user interests and the user context are taken into consideration. We do feel that each of the two presented evaluation steps can be improved in a future evaluation, in order to obtain more insights on our approach and the possible complements mentioned in this conclusions chapter. The size of the evaluated data could be increased, employing more simulated tasks in the scenario based evaluation and more users in the user centered evaluation. We could also complement the user centered approach with specific user questionnaires. We did apply some post-questionnaires, but we felt that they were too general and did not add any further insight to the experimental results. Work conducted for instance by White (2004b) can give more insights on how to use post-questionnaires in interactive systems; however, we would have to extend these in order to take into consideration the personalization effect. A possible extension of a data driven approach would be to simplify the system and make it available to the public; this would lead to obtaining valuable log information, which has proven to give good insights on the performance of the evaluated systems (Dou et al. 2007).

22 http://www.dbpedia.org
23 http://www.wikipedia.org

6.2.2 Semantics

As with any algorithm that exploits semantic metadata, both the quality of the metadata attached to the documents in the search space, and the richness of the representation of these concepts within the KB, are critical to the overall performance. The practical problems involved in meeting the latter conditions are the object of a large body of research on ontology construction (Staab and Studer 2004), semantic annotation (Dill et al. 2003; Kiryakov et al. 2004; Popov et al. 2004), semantic integration (Kalfoglou and Schorlemmer 2003; Noy 2004), and ontology alignment (Euzenat 2004), and are not the focus of this work.
Yet this kind of metadata is still not widespread in common corpora like the Web, although there are some exciting new initiatives such as DBpedia, centered on the annotation of Wikipedia and the creation of its corresponding KB in an ontological language, by means of the application of language processing techniques.

However, our model is not so restrictive as to need formally constructed ontologies. In fact, our proposal imposes relatively soft restrictions on the format of the KB, in the sense that only a set of concepts and a set of relations among them are required. The generality of our model will also accept simpler knowledge representations. For instance, the growing corpora of user-annotated content, known as folksonomies (Specia and Motta 2007), could very well suit our model. Folksonomy KBs resemble our framework of concept-related corpora, as they have content annotated by user tags, and users related to a set of concepts, i.e. the user generated tags. Going to an even simpler schema, we could use simple term correlation techniques (Asnicar and Tasso 1997), so that we can build a concept space with simple correlation connections, where highly correlated concepts (i.e. concepts co-occurring in the same documents) would have a non-labeled weighted relation. Another statistical approach would be to apply dimensionality reduction techniques, such as Latent Semantic Indexing (Sun et al. 2005), whose output is precisely a set of related concepts. Of course, we would need to measure the impact of using these simpler approaches, in the sense that they lack named properties, which proved to be a key component of our semantic expansion approach.
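The term-correlation alternative described above could, for instance, build unlabeled weighted relations from document co-occurrence. The use of raw co-occurrence counts as weights is an illustrative simplification on our part.

```python
# Illustrative sketch: derive unlabeled weighted relations between concepts
# from their co-occurrence in the same documents. Raw counts are used as
# weights for simplicity.
from collections import Counter
from itertools import combinations

def cooccurrence_relations(documents):
    """documents: iterable of concept sets; returns a Counter mapping each
    unordered concept pair to its co-occurrence count."""
    pairs = Counter()
    for concepts in documents:
        for a, b in combinations(sorted(concepts), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [{"Ferrari", "FormulaOne"}, {"Ferrari", "FormulaOne", "Monza"}]
relations = cooccurrence_relations(docs)
```

Such a correlation space could feed the expansion process in place of a formal KB, at the cost of losing the named properties that the constrained expansion exploits.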
The results could be very interesting: on the one hand, our experimental setup uses a complex KB, generated independently from the document corpus, without providing complete knowledge coverage. On the other hand, concept spaces such as folksonomies, and other semantic analysis techniques such as correlation spaces or Latent Semantic Indexes, produce simpler KBs, but are created from the document corpus and offer a much more complete knowledge coverage. Anyhow, the current experiments were performed with an ontology semi-automatically created and populated with Web scraping techniques (Popov et al. 2004). Even using this kind of ontology, we were able to obtain significant results, without having to put in the effort that a manually constructed ontology (with, presumably, a higher quality) would require.

References

Abowd, G. D., A. K. Dey, R. Orr and J. Brotherton (1997). Context-awareness in wearable and ubiquitous computing. First International Symposium on Wearable Computers (ISWC 97), Cambridge, MA, 179-180.

Adomavicius, G. and A. Tuzhilin (2005). "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions." IEEE Transactions on Knowledge and Data Engineering 17(6): 734-749.

Ahn, J.-W., P. Brusilovsky, J. Grady, D. He and S. Syn (2007). Open user profiles for adaptive news systems: help or harm? WWW '07: Proceedings of the 16th international conference on World Wide Web. ACM Press, 11-20.

Akrivas, G., M. Wallace, G. Andreou, G. Stamou and S. Kollias (2002). Context-Sensitive Semantic Query Expansion. ICAIS '02: Proceedings of the 2002 IEEE International Conference on Artificial Intelligence Systems (ICAIS '02). IEEE, Divnomorskoe, Russia.

Allan, J. (2003). Overview of the TREC 2003 HARD track. Proceedings of the 12th Text REtrieval Conference (TREC).

Ardissono, L. and A. Goy (2000). "Tailoring the Interaction with Users in Web Stores." User Modeling and User-Adapted Interaction 10(4): 251-303.

Ardissono, L., A. Goy, G. Petrone, M. Segnan and P. Torasso (2003).
INTRIGUE: Personalized recommendation of tourist attractions for desktop and handset devices. Applied Artificial Intelligence, Special Issue on Artificial Intelligence for Cultural Heritage and Digital Libraries, Taylor. 17: 687-714.

Aroyo, L., P. Bellekens, M. Bjorkman, G.-J. Houben, P. Akkermans and A. Kaptein (2007). SenSee Framework for Personalized Access to TV Content. Interactive TV: a Shared Experience: 156-165.

Arvola, P., M. Junkkari and J. Kekäläinen (2005). Generalized contextualization method for XML information retrieval. CIKM '05: Proceedings of the 14th ACM international conference on Information and knowledge management. ACM, Bremen, Germany, 20-27.

Asnicar, F. and C. Tasso (1997). ifWeb: a Prototype of User Model-Based Intelligent Agent for Document Filtering and Navigation in the World Wide Web. Proc. of 6th International Conference on User Modelling, Chia Laguna, Sardinia, Italy.

Badros, G. J. and S. R. Lawrence (2005). Methods and systems for personalised network searching. US Patent Application 20050131866.

Baeza-Yates, R. and B. Ribeiro-Neto (1999). Modern Information Retrieval, Addison-Wesley.

Barry, C. (1994). "User-defined relevance criteria: an exploratory study." J. Am. Soc. Inf. Sci. 45(3): 149-159.

Bauer, T. and D. Leake (2001). Real time user context modeling for information retrieval agents. CIKM '01: Proceedings of the tenth international conference on Information and knowledge management. ACM, Atlanta, Georgia, USA, 568-570.

Bharat, K. (2000). SearchPad: explicit capture of search context to support Web search. Proceedings of the 9th international World Wide Web conference on Computer networks: the international journal of computer and telecommunications networking. North-Holland, Amsterdam, The Netherlands, 493-501.

Bharat, K., T. Kamba and M. Albers (1998). "Personalized, interactive news on the Web." Multimedia Syst. 6(5): 349-358.

Billsus, D., D. Hilbert and D. Maynes-Aminzade (2005). Improving proactive information systems. IUI '05: Proceedings of the 10th international conference on Intelligent user interfaces. ACM, San Diego, California, USA, 159-166.
Billsus, D. and M. Pazzani (2000). "User Modeling for Adaptive News Access." User Modeling and User-Adapted Interaction 10(2-3): 147-180.

Borlund, P. (2003). "The IIR evaluation model: a framework for evaluation of interactive information retrieval systems." Information Research 8(3): paper no. 152.

Brin, S. and L. Page (1998). "The anatomy of a large-scale hypertextual Web search engine." Computer Networks and ISDN Systems 30: 107-117.

Brown, P. J., J. D. Bovey and X. Chen (1997). "Context-aware applications: from the laboratory to the marketplace." Personal Communications, IEEE 4(5): 58-64.

Brusilovsky, P., J. Eklund and E. Schwarz (1998). "Web-based education for all: a tool for development adaptive courseware." Computer Networks and ISDN Systems 30(1-7): 291-300.

Budzik, J. and K. Hammond (1999). Watson: Anticipating and Contextualizing Information Needs. 62nd Annual Meeting of the American Society for Information Science.

Budzik, J. and K. Hammond (2000). User interactions with everyday applications as context for just-in-time information access. IUI '00: Proceedings of the 5th international conference on Intelligent user interfaces. ACM, New Orleans, Louisiana, United States, 44-51.

Callan, J., A. Smeaton, M. Beaulieu, P. Borlund, P. Brusilovsky, M. Chalmers, C. Lynch, J. Riedl, B. Smyth, U. Straccia and E. Toms (2003). Personalisation and Recommender Systems in Digital Libraries. Joint NSF-EU DELOS Working Group Report.

Campbell, I. and C. van Rijsbergen (1996). The ostensive model of developing information needs. Proceedings of COLIS-96, 2nd International Conference on Conceptions of Library Science, 251-268.

Cantador, I., M. Fernández, D. Vallet, P. Castells, J. Picault and M. Ribière (2008). A Multi-Purpose Ontology-Based Approach for Personalised Content Filtering and Retrieval. Advances in Semantic Media Adaptation and Personalization: 25-51.

Castells, P., M. Fernandez and D. Vallet (2007).
"An Adaptation of the Vector-Space Model for Ontology-Based Information Retrieval." IEEE Transactions on Knowledge and Data Engineering 19(2): 261-272.

Castells, P., M. Fernández, D. Vallet, P. Mylonas and Y. Avrithis (2005). "Self-Tuning Personalized Information Retrieval in an Ontology-Based Framework." 3762: 977-986.

Claypool, M., P. Le, M. Wased and D. Brown (2001). Implicit interest indicators. IUI '01: Proceedings of the 6th international conference on Intelligent user interfaces. ACM Press, Santa Fe, NM, USA, 33-40.

Cleverdon, C. W., J. Mills and M. Keen (1966). "Factors determining the performance of indexing systems." ASLIB Cranfield project, Cranfield.

Crestani, F. (1997). "Application of Spreading Activation Techniques in Information Retrieval." Artif. Intell. Rev. 11(6): 453-482.

Crestani, F. and P. Lee (1999). WebSCSA: Web search by constrained spreading activation. ADL '99: Proceedings of Research and Technology Advances in Digital Libraries, 1999, 163-170.

Crestani, F. and P. Lee (2000). "Searching the Web by constrained spreading activation." Inf. Process. Manage. 36(4): 585-605.

Chakrabarti, S., M. den Berg and B. Dom (1999). "Focused crawling: a new approach to topic-specific Web resource discovery." Computer Networks and ISDN Systems 31(11-16): 1623-1640.

Chalmers, M. (2004). "A Historical View of Context." Computer Supported Cooperative Work (CSCW) 13(3): 223-247.

Chen, C., M. Chen and Y. Sun (2002). "PVA: A Self-Adaptive Personal View Agent." Journal of Intelligent Information Systems 18(2): 173-194.

Chen, L. and K. Sycara (1998). WebMate: a personal agent for browsing and searching. AGENTS '98: Proceedings of the second international conference on Autonomous agents. ACM, 132-139.

Chen, P.-M. and F.-C. Kuo (2000). "An information retrieval system based on a user profile." J. Syst. Softw. 54(1): 3-8.

Chirita, P.-A., C. Firan and W. Nejdl (2006). Summarizing local context to personalize global web search. CIKM '06: Proceedings of the 15th ACM international conference on Information and knowledge management. ACM, Arlington, Virginia, USA, 287-296.

Chirita, P., W. Nejdl, R.
Paiu and C. Kohlschütter (2005). Using ODP metadata to personalize search. SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, Salvador, Brazil, 178-185.

Chirita, P. A., D. Olmedilla and W. Nejdl (2003). Finding related hubs and authorities. Web Congress, 2003. Proceedings. First Latin American, Santiago, Chile, 214-215.

Dasiopoulou, S., V. Mezaris, I. Kompatsiaris, V. K. Papastathis and M. G. Strintzis (2005). "Knowledge-assisted semantic video object detection." Circuits and Systems for Video Technology, IEEE Transactions on 15(10): 1210-1224.

De Bra, P., A. Aerts, B. Berden, B. de Lange, B. Rousseau, T. Santic, D. Smits and N. Stash (1998). "AHA! The Adaptive Hypermedia Architecture." The New Review of Hypermedia and Multimedia 4: 115-139.

Dill, S., N. Eiron, D. Gibson, D. Gruhl, R. Guha, A. Jhingran, T. Kanungo, K. S. McCurley, S. Rajagopalan, A. Tomkins, J. A. Tomlin and J. Y. Zien (2003). "A Case for Automated Large Scale Semantic Annotation." Journal of Web Semantics 1(1): 115-132.

Dou, Z., R. Song and J. Wen (2007). A Large-scale Evaluation and Analysis of Personalized Search Strategies. WWW 2007: Proceedings of the 16th international World Wide Web conference, Banff, Alberta, Canada, 572-581.

Dumais, S., E. Cutrell, J. J. Cadiz, G. Jancke, R. Sarin and D. Robbins (2003). Stuff I've seen: a system for personal information retrieval and re-use. SIGIR '03: Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, Toronto, Canada, 72-79.

Dwork, C., R. Kumar, M. Naor and D. Sivakumar (2001). Rank aggregation methods for the Web. World Wide Web. ACM Press, Hong Kong, Hong Kong, 613-622.

Edmonds, B. (1999). The Pragmatic Roots of Context. CONTEXT '99: Proceedings of the Second International and Interdisciplinary Conference on Modeling and Using Context. Springer-Verlag, Trento, Italy, 119-132.

Eisenstein, J., J. Vanderdonckt and A. Puerta (2000). Adapting to mobile contexts with user-interface modeling.
WMCSA '00: Proceedings of the Third IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '00). IEEE, Monterey, California, USA.

Encarnação, M. (1997). Multi-level user support through adaptive hypermedia: a highly application-independent help component. IUI '97: Proceedings of the 2nd international conference on Intelligent user interfaces. ACM, Orlando, Florida, USA, 187-194.

Euzenat, J. (2004). Evaluating ontology alignment methods. Proceedings of the Dagstuhl seminar on Semantic, Wadern, Germany, 47-50.

Fink, J. and A. Kobsa (2000). "A Review and Analysis of Commercial User Modeling Servers for Personalization on the World Wide Web." User Modeling and User-Adapted Interaction 10(2-3): 209-249.

Fink, J. and A. Kobsa (2002). "User Modeling for Personalized City Tours." Artif. Intell. Rev. 18(1): 33-74.

Fink, J., A. Kobsa and A. Nill (1997). Adaptable and Adaptive Information Access for All Users, Including the Disabled and the Elderly. UM 1997: Proceedings of the 6th International Conference on User Modelling. Springer, Chia Laguna, Sardinia, Italy, 171-176.

Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman and E. Ruppin (2001). Placing search in context: the concept revisited. World Wide Web, Hong Kong, Hong Kong, 406-414.

Furnas, G. W., S. Deerwester, S. T. Dumais, T. K. Landauer, R. A. Harshman, L. A. Streeter and K. E. Lochbaum (1988). Information retrieval using a singular value decomposition model of latent semantic structure. SIGIR '88: Proceedings of the 11th annual international ACM SIGIR conference on Research and development in information retrieval. ACM Press, Grenoble, France, 465-480.

Gauch, S., J. Chaffee and A. Pretschner (2003). "Ontology-based personalized search and browsing." Web Intelli. and Agent Sys. 1(3-4): 219-234.

Gauch, S., M. Speretta, A. Chandramouli and A. Micarelli (2007). User Profiles for Personalized Information Access. The Adaptive Web: 54-89.

Guha, R., R. McCool and E. Miller (2003). Semantic search. WWW 2003: Proceedings of the Twelfth International World Wide Web Conference, Budapest, Hungary, 700-709.
Hanumansetty,R.(2004).ModelBasedApproachforContextAwareandAdaptiveuser Interface Generation. Haveliwala,T.(2002).Topic-sensitivePageRank.WWW2002:ProceedingsoftheEleventh International World Wide Web Conference, Honolulu, Hawaii, USA,517-526. Heer,J., A.Newberger,C. Beckmann and J. Hong (2003). Liquid: Context-Aware Distributed Queries. UbiComp 2003: Ubiquitous Computing, Seattle, Washington, USA,140-148. Henzinger, M., B.-W. Chang, B. Milch and S. Brin (2003). Query-free news search. WWW '03: Proceedings of the 12th international conference on World Wide Web. ACM, Budapest, Hungary,1-10. References125 Hirsh,H.,C.BasuandB.Davison(2000)."Enablingtechnologies:learningtopersonalize." Communications of the ACM 46(8): 102-106. Huang, X., F. Peng, A. An and D. Schuurmans (2004). "Dynamic web log session identification withstatisticallanguagemodels."JournaloftheAmericanSocietyforInformation Science and Technology 55(14): 1290-1303. Jansen,B.,A.Spink,J.BatemanandT.Saracevic(1998)."Reallifeinformationretrieval:a study of user queries on the Web." SIGIR Forum 32(1): 5-17. Järvelin,K.andJ.Kekäläinen(2000).IRevaluationmethodsforretrievinghighlyrelevant documents.SIGIR'00:Proceedingsofthe23rdannualinternationalACMSIGIR conferenceonResearchanddevelopmentininformationretrieval.ACM,Athens, Greece,41-48. Jeh, G. and J. Widom (2003). Scaling personalized web search. WWW '03: Proceedings of the 12thinternationalconferenceonWorldWideWeb.ACM,Budapest,Hungary,271- 279. Jose, J. M. and J. Urban (2006). "EGO: A Personalised Multimedia Management and Retrieval Tool."InternationalJournalofIntelligentSystems(Specialissueon"Intelligent Multimedia Retrieval") 21(7): 725-745. Kalfoglou,Y.andM.Schorlemmer(2003)."Ontologymapping:thestateoftheart."Knowl. Eng. Rev. 18(1): 1-31. 
Katifori,A.,C.VassilakisandA.Dix(2008).UsingSpreadingActivationthroughOntologies toSupportPersonalInformationManagement.CSKGOI'08:ProceedingsofWorkshop onCommonSenseKnowledgeandGoal-OrientedInterfaces,inIUI2008,Canary Islands, Spain. Kelly,D.andJ.Teevan(2003)."ImplicitFeedbackforInferringUserPreference:A Bibliography." SIGIR Forum 32(2): 18-28. Kerschberg,L.,W.KimandA.Scime(2001).ASemanticTaxonomy-BasedPersonalizable Meta-Search Agent. WISE '01: Proceedings of the Second International Conference on Web Information Systems Engineering (WISE'01) Volume 1. IEEE, Kyoto, Japan. Kim,H.andP.Chan(2003).Learningimplicituserinteresthierarchyforcontextin personalization.IUI'03:Proceedingsofthe8thinternationalconferenceonIntelligent user interfaces. ACM, Miami, USA,101-108. Kiryakov, A., B. Popov, I. Terziev, D. Manov and D. Ognyanoff (2004). "Semantic annotation, indexing,andretrieval."WebSemantics:Science,ServicesandAgentsontheWorld Wide Web 2(1): 49-79. Kobsa,A.(2001)."GenericUserModelingSystems."UserModelingandUser-Adapted Interaction 11(1-2): 49-63. Koutrika,G.andY.Ioannidis(2005).AUnifiedUserProfileFrameworkforQuery DisambiguationandPersonalization.PIA2005:ProceedingsofWorkshoponNew Technologies for Personalized Information Access. Kraft,R.,C.Chang,F.MaghoulandR.Kumar(2006).SearchingwithContext.WWW'06: Proceedingsofthe15thinternationalconferenceonWorldWideWeb.ACM, Edinburgh, Scotland,367-376. Kraft, R., F. Maghoul and C. Chang (2005). Y!Q: contextual search at the point of inspiration. CIKM'05:Proceedingsofthe14thACMinternationalconferenceonInformationand knowledge management. ACM, Bremen, Germany,816-823. 126References Krovetz,R.andB.Croft(1992)."Lexicalambiguityandinformationretrieval."ACMTrans. Inf. Syst. 10(2): 115-141. Krulwich,B.andC.Burkey(1997)."TheInfoFinderagent:learninguserintereststhrough heuristic phrase extraction." IEEE Expert [see also IEEE Intelligent Systems and Their Applications] 12(5): 22-27. Lang, K. (1995). NewsWeeder: learning to filter netnews. 
Proceedings of the 12th International Conference on Machine Learning. Morgan Kaufmann publishers Inc.: San Mateo, CA, USA,331-339. Lee, J. (1997). Analyses of multiple evidence combination. SIGIR '97: Proceedings of the 20th annualinternationalACMSIGIRconferenceonResearchanddevelopmentin information retrieval. ACM, New York, NY, USA,267-276. Leroy,G.,A.LallyandH.Chen(2003)."Theuseofdynamiccontextstoimprovecasual internet searching." ACM Trans. Inf. Syst. 21(3): 229-253. Lieberman,H.(1995).Letizia,anagentthatassistswebbrowsing.IJCAI95:Proceedingsof InternationalJointProceedingsoftheFourteenthInternationalJointConferenceon Artificial Intelligence. Morgan Kaufmann,924-929. Liu,F.,C.YuandW.Meng(2004)."PersonalizedWebSearchForImprovingRetrieval Effectiveness." IEEE Transactions on Knowledge and Data Engineering 16(1): 28-40. Ma, Z., G. Pant and Olivia (2007). "Interest-based personalized search." ACM Trans. Inf. Syst. 25(1). Manmatha,R.,T.RathandF.Feng(2001).Modelingscoredistributionsforcombiningthe outputsofsearchengines.SIGIR'01:Proceedingsofthe24thannualinternational ACM SIGIRconference on Research and development in information retrieval. ACM, New York, NY, USA,267-275. Martin,I.andJ.Jose(2004).Fetch:Apersonalisedinformationretrievaltool.RIAO2004: Proceedingsofthe8thRecherched'InformationAssisteparOrdinateur(computer assisted information retrieval), Avignon, France,405-419. Mathes,A.(2004)."Folksonomies-CooperativeClassificationandCommunicationThrough SharedMetadata."fromhttp://www.adammathes.com/academic/computer-mediated- communication/folksonomies.pdf Melucci,M.(2005).Contextmodelinganddiscoveryusingvectorspacebases.CIKM'05: Proceedingsofthe14thACMinternationalconferenceonInformationandknowledge management. ACM, Bremen, Germany,808-815. Micarelli,A.,F.Gasparetti,F.SciarroneandS.Gauch(2007).PersonalizedSearchonthe World Wide Web. The Adaptive Web: 195-230. Micarelli, A. and F. Sciarrone (2004). 
"Anatomy and Empirical Evaluation of an Adaptive Web- BasedInformationFilteringSystem."UserModelingandUser-AdaptedInteraction 14(2-3): 159-200. Middleton,S.,N.ShadboltandD.DeRoure(2003).Capturinginterestthroughinferenceand visualization:ontologicaluserprofilinginrecommendersystems.K-CAP'03: Proceedingsofthe2ndinternationalconferenceonKnowledgecapture.ACMPress, 62-69. Mitrovic,N.andE.Mena(2002).AdaptiveUserInterfaceforMobileDevices.DSV-IS'02: Proceedingsofthe9thInternationalWorkshoponInteractiveSystems.Design, Specification, and Verification. Springer-Verlag, Dublin, Ireland,29-43. References127 Montaner, M., B. López and J. L. Rosa (2003). "A Taxonomy of Recommender Agents on the Internet." Artificial Intelligence Review 19(4): 285-330. Noll,M.andC.Meinel(2007).WebSearchPersonalizationviaSocialBookmarkingand Tagging. ISWIC 2007: Proceedings of the 6th International Semantic Web Conference, 367-380. Noy, N. (2004). "Semantic integration: a survey of ontology-based approaches." SIGMOD Rec. 33(4): 65-70. Perrault,R.,J.AllenandP.Cohen(1978).Speechactsasabasisforunderstandingdialogue coherence. Proceedings of the 1978 workshop on Theoretical issues in natural language processing. Association, Urbana-Campaign, Illinois, United States,125-132. Pitkow,J.,H.Schuatze,T.Cass,R.Cooley,D.Turnbull,A.Edmonds,E.AdarandT.Breuel (2002). "Personalized search." Commun. ACM 45(9): 50-55. Popov,B.,A.Kiryakov,D.Ognyanoff,D.ManovandA.Kirilov(2004)."KIM-asemantic platformforinformationextractionandretrieval."JournalofNaturalLanguage Engineering 10(3-4): 375-392. Renda,E.andU.Straccia(2003).Webmetasearch:rankvs.scorebasedrankaggregation methods.SAC'03:Proceedingsofthe2003ACMsymposiumonAppliedcomputing. ACM, New York, NY, USA,841-846. Rhodes,B.J.andP.Maes(2000)."Just-in-timeinformationretrievalagents."IBMSyst.J. 39(3-4): 685-704. Ricardo and Berthier (1999). Modern Information Retrieval, ACM. Rich, E. (1998). User modeling via stereotypes. 
Readings in intelligent user interfaces Morgan Kaufmann: 329-342. Robertson, S. E. (1990). "On term selection for query expansion." J. Doc. 46(4): 359-364. Rocchio, J. and G. Salton (1971). Relevance feedback in information retrieval, Prentice-Hall. Rocha,C.,D.SchwabeandM.deAragao(2004).Ahybridapproachforsearchinginthe semantic web. WWW2004: Proceedings of the 13th international conference on World Wide Web. New York, NY, USA. Sakagami,H.andT.Kamba(1997)."Learningpersonalpreferencesononlinenewspaper articles from user behaviors." Comput. Netw. ISDN Syst. 29(8-13): 1447-1455. Salton,G.andC.Buckley(1988).Ontheuseofspreadingactivationmethodsinautomatic information.SIGIR'88:Proceedingsofthe11thannualinternationalACMSIGIR conference on Research and development in information retrieval. ACM,147-160. Salton,G.andC.Buckley(1990)."Improvingretrievalperformancebyrelevancefeedback." Journal of the American Society for Information Science 41(4): 288-297. Salton, G. and M. McGill (1986). Introduction to Modern Information Retrieval, McGraw-Hill. Schafer,J.,D.Frankowski,J.HerlockerandS.Sen(2007).CollaborativeFiltering Recommender Systems. The Adaptive Web: 291-324. Seo,Y.andB.Zhang(2001)."Personalizedweb-documentfilteringusingreinforcement learning." Applied Artificial Intelligence. Shen,X.,B.TanandC.Zhai(2005a).Context-sensitiveinformationretrievalusingimplicit feedback.SIGIR'05:Proceedingsofthe28thannualinternationalACMSIGIR 128References conferenceonResearchanddevelopmentininformationretrieval.ACM,Salvador, Brazil,43-50. Shen,X.,B.TanandC.Zhai(2005b).Implicitusermodelingforpersonalizedsearch.CIKM '05:Proceedingsofthe14thACMinternationalconferenceonInformationand knowledge management. ACM, Bremen, Germany,824-831. Sheth, B. and P. Maes (1993). Evolving agents for personalized information filtering. Artificial Intelligence for Applications, 1993. Proceedings., Ninth Conference on,345-352. 
Sieg,A.,B.MobasherandR.Burke(2007).OntologicalUserProfilesforPersonalizedWeb Search.ITWP07:ProceedingsoftheIntelligentTechniquesforWebPersonalization Workshop, in the 22nd National Conference on Artificial Intelligence (AAAI 2007). Smeaton,A.andJ.Callan(2001).ERCIM2001:Proceedingsofthe2ndDELOSNetworkof ExcellenceWorkshoponPersonalisationandRecommenderSystemsinDigital Libraries. Dublin, Ireland. Smith,B.R.,G.D.LindenandN.K.Zada(2005).Contentpersonalisationbasedonactions performed during a current browsing session. US Patent Application 6853983B2. Specia, L. and E. Motta (2007). Integrating Folksonomies with the Semantic Web. ESWC 2007: Proceedings of 4th European Semantic Web Conference site, Innsbruck, Austria. Speretta,M.andS.Gauch(2005).Personalizedsearchbasedonusersearchhistories. Proceedings. The 2005 IEEE/WIC/ACM International Conference on Web Intelligence, 622-628. Sriram, S., X. Shen and C. Zhai (2004). A session-based search engine (Poster). Proceedings of SIGIR 2004. Staab,S.andR.Studer(2004).HandbookonOntologies.BerlinHeidelbergNewYork, SpringerVerlag. Stojanovic,N.,R.StuderandL.Stojanovic(2003).AnApproachfortheRankingofQuery results in the Semantic Web. The Semantic Web. Springer,500-516. Sugiyama, K., K. Hatano and M. Yoshikawa (2004). Adaptive web search based on user profile constructedwithoutanyeffortfromusers.WWW2004:Proceedingsofthe13th international conference on World Wide Web. ACM, New York, NY, USA,675-684. Sun,J.-T.,H.-J.Zeng,H.Liu,Y.LuandZ.Chen(2005).CubeSVD:anovelapproachto personalized Web search. WWW '05: Proceedings of the 14th international conference on World Wide Web. ACM, Chiba, Japan,382-390. Tan,B.,X.ShenandC.Zhai(2006).Mininglong-termsearchhistorytoimprovesearch accuracy.KDD'06:Proceedingsofthe12thACMSIGKDDinternationalconference on Knowledge discovery and data mining. ACM, Philadelphia, PA, USA,718-723. Tanudjaja,F.andL.Mui(2002).Persona:acontextualizedandpersonalizedwebsearch. 
SystemSciences,2002.HICSS.Proceedingsofthe35thAnnualHawaiiInternational Conference on, Hawaii, US,1232-1240. Teevan,J.,S.DumaisandE.Horvitz(2005).Personalizingsearchviaautomatedanalysisof interestsandactivities.SIGIR'05:Proceedingsofthe28thannualinternationalACM SIGIRconferenceonResearchanddevelopmentininformationretrieval.ACM, Salvador, Brazil,449-456. Terveen,L.andW.Hill(2001).BeyondRecommenderSystems:HelpingPeopleHelpEach Other, Addison. References129 Thomas, P. and D. Hawking (2006). Evaluation by comparing result sets in context. CIKM '06: Proceedingsofthe15thACMinternationalconferenceonInformationandknowledge management. ACM, Arlington, Virginia, USA,94-101. Vallet,D.,M.FernandezandP.Castells(2005).AnOntology-BasedInformationRetrieval Model.LectureNotesinComputerScience:TheSemanticWeb:Researchand Applications. Springer, Heraklion, Greece,455-470. Vassileva,J.(1997)."DynamicCourseGenerationontheWWW."BritishJournalof Educational Technologies 29(1): 5-14. Vishnu, K. (2005). Contextual Information Retrieval Using Ontology Based User Profiles. Vogt, C. and G. Cottrell (1999). "Fusion Via a Linear Combination of Scores." Inf. Retr. 1(3): 151-173. Wallace,M.andG.Stamou(2002).Towardsacontextawareminingofuserinterestsfor consumptionofmultimediadocuments.MultimediaandExpo,2002.ICME'02. Proceedings. 2002 IEEE International Conference on, Lausanne, Switzerland,733-736 vol.731. White,R.(2004a).ContextualSimulationsforInformationRetrievalEvaluation.IRIX2004: Workshop on Information Retrieval in Context, at the 27th Annual International ACM SIGIRConferenceonResearchandDevelopmentinInformationRetrieval(SIGIR 2004) Sheffield, UK. White, R. (2004b). Implicit Feedback for Interactive Information retrieval. White,R.,J.JoseandI.Ruthven(2006)."Animplicitfeedbackapproachforinteractive information retrieval." Inf. Process. Manage. 42(1): 166-190. 
White,R.andD.Kelly(2006).Astudyontheeffectsofpersonalizationandtaskinformation onimplicitfeedbackperformance.CIKM'06:Proceedingsofthe15thACM international conference on Information and knowledge management. ACM, Arlington, Virginia, USA,297-306. White,R.,I.RuthvenandJ.Jose(2005a).Astudyoffactorsaffectingtheutilityofimplicit relevancefeedback.SIGIR'05:Proceedingsofthe28thannualinternationalACM SIGIRconferenceonResearchanddevelopmentininformationretrieval.ACM, Salvador, Brazil,35-42. White, R., I. Ruthven, J. Jose and C. J. Van Rijsbergen (2005b). "Evaluating implicit feedback models using searcher simulations." ACM Trans. Inf. Syst. 23(3): 325-361. Whitworth, W. (1965). Choice and Chance, with one thousand exercises Books, Hafner. Widmer,G.andM.Kubat(1996)."Learninginthepresenceofconceptdriftandhidden contexts." Machine Learning 23(1): 69-101. Widyantoro,D.,T.IoergerandJ.Yen(1999).Anadaptivealgorithmforlearningchangesin userinterests.CIKM'99:Proceedingsoftheeighthinternationalconferenceon Information and knowledge management. ACM, Kansas City, Missouri, United States, 405-412. Widyantoro, D., J. Yin, M. Seif, E. Nasr, L. Yang, A. Zacchi and J. Yen (1997). Alipes: A swift messengerincyberspace.ProceedingsofAAAISpringSymposiumonIntelligent Agents in Cyberspace,62-67. Wilkinson,R.andM.WuEvaluationExperimentsandExperiencefromthePerspectiveof InteractiveInformationRetrieval.Proceedingsofthe3rdWorkshoponEmpirical Evaluation ofAdaptiveSystems,inconjunctionwiththe2ndInternationalConference 130References onAdaptiveHypermediaandAdaptiveWeb-BasedSystems,Eindhoven,The Netherlands,,221-230. Yang,Y.andB.Padmanabhan(2005)."EvaluationofOnlinePersonalizationSystems:A SurveyofEvaluationSchemesandAKnowledge-BasedApproach."Journalof Electronic Commerce Research 6(2): 112-122. Yuen,L.,M.Chang,Y.K.LaiandC.Poon(2004).Excalibur:apersonalizedmetasearch engine. COMPSAC 2004: Proceedings of the 28th Annual International Conference on Computer Software and Applications,49-50 vol.42. 
Appendices

Appendix A

A. Detailed Results for the Scenario Based Experiments

This appendix gives more insight into the simulated tasks used and the results obtained from the scenario based experiments. Each task description includes:

- Topic: The last query that the user issued.
- Relevancy to context: Indicates which documents should be considered relevant to the actual context of the user, described by the current retrieval session interactions.
- Relevancy to preferences: Indicates when a document must be considered relevant to the user interests.
- Interaction model: Gives the detailed interaction steps that the user followed before issuing the last query.
- Precision and Recall: The resultant PR graph for this specific task.

Task 1. Stock shares: Banking sector companies

Topic
Stock shares

Relevancy to context
Relevant documents are those that mention stock shares of companies related to the banking sector.

Relevancy to preferences
Consider that the document adjusts to your preferences when the company has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Companies active in the banking sector
2. Opened document: n=3, docId=021452

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 2. Companies trading in the NYSE: The Hilton Company

Topic
Companies that trade on the New York Stock Exchange and their market brands.

Relevancy to context
A document is relevant if it mentions the Hilton Company and their hotel chain "Hampton Inn".
The document must indicate the relation between this company and their hotel chain.

Relevancy to preferences
Consider that the document adjusts to your preferences when either the company or the company's brand has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Hilton Company
2. Opened document: n=1, docId=121475

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 3. Companies and their brands: Homewood Suites hotel chain

Topic
Companies and their market brands

Relevancy to context
Relevant documents must mention the hotel chain "Homewood Suites" and the company that owns it: Hilton Co.

Relevancy to preferences
Consider that the document adjusts to your preferences when either the company or the company's brand has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Homewood suites brand
2. Opened document: n=1, docId=147562
3. Opened document: n=2, docId=012457
4. Opened document: n=3, docId=032122

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 4. Companies and their brands: Public companies active in the Food, Beverage and Tobacco sector

Topic
Companies and their market brands.

Relevancy to context
Relevant documents are those that mention a public company, or a company that has partial state support, together with its market brand (e.g. Kellogg Co. and Kellogg's).

Relevancy to preferences
Consider that the document adjusts to your preferences when either the company or the company's brand has a positive interest in the user profile.
Interaction model
1. Query input [semantic]: Companies active in the Food, Beverage and Tobacco sector
2. Opened document: n=1, docId=018546
3. Opened document: n=2, docId=064552
4. Opened document: n=3, docId=078455

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 5. Companies with high Fiscal Net Income: Japan based companies

Topic
Companies with Fiscal Net Income > $100M.

Relevancy to context
Relevant documents are those that mention a company based in Japan that has a high average Fiscal Net Income.

Relevancy to preferences
Consider that the document adjusts to your preferences when the company has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Tokyo city
2. Query input [semantic]: Kyoto city
3. Opened document: n=3, docId=12669

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 6. Companies trading in the NYSE: Companies based in the USA

Topic
Companies that trade on the New York Stock Exchange.

Relevancy to context
Relevant documents are those that mention companies that trade on the NYSE and are based in the USA.

Relevancy to preferences
Consider that the document adjusts to your preferences when the company has a positive interest in the user profile.

Interaction model
1. Query input [keyword]: Miami Chicago
2. Opened document: n=1, docId=113425
3. Opened document: n=2, docId=051425

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 7. Companies that have child organizations: Companies that own a Magazine related branch

Topic
Companies and their child organizations

Relevancy to context
Relevant documents are those that mention a company that has a child organization related to the Magazine sector (e.g.
Time Co. and Time Magazine).

Relevancy to preferences
Consider that the document adjusts to your preferences when the company has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Companies that own a magazine
2. Opened document: n=3, docId=089415

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 8. Travel: Airline companies that trade on NASDAQ

Topic
Travel

Relevancy to context
Relevant documents are those that mention an airline company that trades on the NASDAQ stock exchange.

Relevancy to preferences
Consider that the document adjusts to your preferences when the company has a positive interest in the user profile.

Interaction model
1. Query input [semantic]: Companies that trade on NASDAQ
2. Query input [semantic]: Airline companies

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 9. Companies trading in the NYSE: Car industry companies

Topic
Companies that trade on the New York Stock Exchange and their market brands

Relevancy to context
Consider a document relevant to the task if it mentions a company active in the car industry sector, together with the brand that it has in the market. The document has to explicitly mention this relation of ownership between the company and the brand.

Relevancy to preferences
Consider that the document adjusts to your preferences when either the company or the company's brand has a positive interest in the user profile.

Interaction model
1. Query input [keyword]: Mercedes Maybach
2. Opened document: n=1, docId=154235
3. Opened document: n=2, docId=075482

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Task 10.
Oil energy in Iraq: North American companies active in the energy sector

Topic
Oil energy in Iraq.

Relevancy to context
Relevant documents are those that mention North American based companies that are active in the energy sector.

Relevancy to preferences
Consider that the document adjusts to your preferences when the company is (or is partially) publicly owned.

Interaction model
1. Query input [semantic]: American companies active in energy sector
2. Opened document: n=1, docId=004585

Precision and Recall
[Precision-Recall graph comparing Contextual Personalization, Simple Personalization, and Personalization Off]

Appendix B

B. User Centered Evaluation Task Descriptions

This appendix gives the task descriptions for the three retrieval tasks used in the user centered evaluation approach. Each task description contains:

- Relevancy to task: Indicates which documents must be considered relevant to the task; it can be regarded as the task description.
- Relevancy to preferences: Indicates when a document must be considered relevant to the user's interests.
- Example of relevant document: Gives a snippet of a document that is considered relevant to the task.

Task 1: Agreements between companies

Relevancy to task
Relevant documents are those that state an agreement between two companies; the article must name the two companies explicitly. For instance, articles about a collaboration or an investment agreement between two companies are considered relevant. Agreements where one company buys, totally or partially, another company are NOT considered relevant.

Relevancy to preferences
Consider that the article adjusts to your preferences when one of the mentioned companies has a positive value in your user profile.

Example of relevant document to the task (excerpt)
CNN.com - Microsoft, AOL settle lawsuit - May.
30, 2003

Microsoft, AOL settle lawsuit
The two companies also said they have reached a wide-ranging cooperative agreement, under which they will explore ways to allow people using their competing instant message systems to communicate with each other. Microsoft has also agreed to provide America Online software to some computer manufacturers.

Task 2: Release of a new electronic gadget

Relevancy to task
Relevant documents must mention the release of a new electronic product. Examples of electronic products are music players, gaming devices, PCs, flat screens, mobile devices, etc. It must be a substantial product; for instance, a software program is considered non-relevant.

Relevancy to preferences
Consider that the article adjusts to your preferences when the company or companies that launch the product have a positive value in your user profile.

Example of relevant document to the task (excerpt)
CNN.com - Will fans want their MTV PC? - January 13, 2002

Will fans want their MTV PC?
The pioneer of music-oriented TV is looking to tempt media-hungry technophiles with a line of PCs and complementary products set for release early this year. Targeting 18-to-24-year-olds, MTV is looking to let that gadget-happy demographic watch TV, play DVDs, listen to music and browse the Internet, all on one device. The company, a unit of Viacom International, also expects to launch a line of products centered around video-game play, according to a statement.

Task 3: Cities hosting a motor sport related event

Relevancy to task
Relevant documents must describe an upcoming motor sport event (e.g. motorcycle, formula one, car rally) together with information on the city that is hosting the event.

Relevancy to preferences
Consider that the article adjusts to your preferences when the hosting city belongs to a country that you have marked as preferred in your user profile. You can also consider relevant those documents that mention a motor sport that has a positive value in your profile.
Example of relevant document to the task (excerpt)
CNN.com - Canadian Grand Prix given reprieve - Oct. 16, 2003

Canadian Grand Prix given reprieve
The International Automobile Federation (FIA) issued a revised calendar with Montreal included as an additional 18th race, to be held on June 13 before the U.S. Grand Prix at Indianapolis on June 20.
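The PR graphs reported for each task above are standard interpolated precision-recall curves. As a minimal sketch of how such curve points can be derived from a ranked result list and a set of relevance judgments (the document IDs below are hypothetical, not taken from the test collection):

```python
def pr_points(ranked, relevant):
    """(recall, precision) after each retrieved document in rank order."""
    points, hits = [], 0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / i))
    return points

def interpolated_precision(points, levels=11):
    """11-point interpolated precision: for each recall level r,
    the maximum precision over all points with recall >= r."""
    curve = []
    for k in range(levels):
        level = k / (levels - 1)
        precs = [p for r, p in points if r >= level]
        curve.append(max(precs) if precs else 0.0)
    return curve

# Hypothetical ranking and relevance judgments
ranked = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d3", "d1", "d2"}
pts = pr_points(ranked, relevant)
curve = interpolated_precision(pts)
```

Plotting one such interpolated curve per configuration (contextual personalization, simple personalization, personalization off) over the same ranked lists yields graphs of the kind summarized in the task descriptions.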


