
A Personalized E-Learning System Based on User Profile Constructed Using Information Fusion

A Personalized E-Learning System Based on User Profile Constructed Using Information Fusion. Xin Li and Shi-Kuo Chang, Department of Computer Science, University of Pittsburgh, USA. In Proceedings of the 11th International Conference on Distributed Multimedia Systems, pages 109–114, 2005.


Presentation Transcript


  1. A Personalized E-Learning System Based on User Profile Constructed Using Information Fusion. Xin Li and Shi-Kuo Chang, Department of Computer Science, University of Pittsburgh, USA. In Proceedings of the 11th International Conference on Distributed Multimedia Systems, pages 109–114, 2005. Presenter: Hsiao-Pei Chang

  2. Outline • Introduction • Related Research • System Architecture • Feedback Extractor & User Profiler • Experiments • Discussion and Future Research

  3. Introduction (1/4) • E-learning is a revolutionary way to provide lifelong education. • However, the high diversity of learners on the Internet poses new challenges to the traditional “one-size-fits-all” learning model, in which a single set of learning resources is provided to all learners. • It is therefore important to provide a personalized system that can automatically adapt to learners' interests and levels.

  4. Introduction (2/4) • User profiling is a promising approach towards personalized e-learning systems, where a user profile covering interests, levels and learning patterns can be assessed during the learning process. • Based upon the profile, personalized learning resources can be generated to match individual preferences and levels. • User profiling is also the key process in many other applications: • Recommendation systems • Personalized web search engines

  5. Introduction (3/4) • Most user profiling approaches depend heavily on user feedback to construct user profiles. • The feedback can be assessed explicitly through ratings, or implicitly through user behaviors such as printing and saving. • This paper proposes a system that combines multiple feedback measures using information fusion techniques to obtain more complete and accurate profiles.

  6. Introduction (4/4) • Our e-learning system is designed based upon the IEEE Learning Technology Systems Architecture (LTSA). • A feedback extractor with fusion capability is designed to combine multiple feedback measures. • The user profile, which stores user preferences and levels of expertise, is maintained by the user profiler to deliver personalized information using a collaborative filtering algorithm.

  7. Related Research – Personalized E-Learning System • Blochl et al. [2] proposed an adaptive learning system that incorporates psychological aspects of the learning process into the user profile to deliver individualized learning resources. • SPERO [10] is a personalized e-learning system based on the IEEE Learning Technology Systems Architecture (LTSA). It provides different contents for foreign language learners according to their interests and levels. • Problem: both require too much extra work.

  8. Related Research – Recommendation Systems • Depending on the underlying technique, recommendation systems can be divided into collaborative filtering-based [8], content-based [4] and hybrid [1, 9] approaches. • Classified by the means of acquiring feedback, they can be categorized as explicit rating [1, 8, 9], implicit rating [8] and no-rating-needed [4] systems. • In fact, user feedback is so important that only very few content-based recommendation systems require neither explicit nor implicit ratings.

  9. Related Research – Information Fusion • The key commonality underlying applications that require information fusion is that they need to retrieve information on the same object from multiple data sources [5]. • For example, in our approach multiple indicators are available to assess user preference; combining them to obtain more complete and accurate results is a fusion problem. • Goal: a joint combined declaration from the individual indicators. • Techniques: voting, Bayesian inference, and so on.

  10. System Architecture (1/2) • The IEEE Learning Technology Systems Architecture (IEEE LTSA) is a component-based framework for general learning systems with high scalability and reusability. It includes three types of components: Processes, Stores, and Flows.

  11. System Architecture (2/2) • In our system, we instantiate the abstract conceptual models in IEEE LTSA with the real components shown in Figure 2: • Learning Resources; the Learner's Record is implemented by the User Profile; the Evaluation Entity is implemented by the Feedback Extractor; the Delivery Entity is implemented by the Learner Client; the Coach Entity is implemented by the User Profiler.

  12. Learning Resources and User Profile • Learning Resources (e.g. webpages) are organized by topics, which are structured in an ontology knowledge base (OKB). Each topic is attached with several keywords.

  13. Definition 4.1 A webpage pg is a 4-tuple: pg = <id, co, tp, l>, where • id is a unique identification number. • co is the content of pg. • tp is the topic. • l is the expertise level of pg: beginning, intermediate or advanced. • Definition 4.2 A user profile upf is a 4-tuple: upf = <id, bh, ch, ls>, where • id is a unique identification number. • bh is the browsing history, represented as {<pg1, r1>, <pg2, r2>, …, <pgn, rn>}, where pgi is a webpage read by the user and ri is the user preference on pgi in [0..1] (i = 1, 2, …, n). • ch is the chatting history in natural language. • ls is the levels of expertise, represented as {<tp1, l1>, <tp2, l2>, …, <tpn, ln>}, where tpi is a topic and li is the level on tpi in terms of beginning, intermediate or advanced (i = 1, 2, …, n).
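The two definitions above map naturally onto simple data structures. The following is a minimal Python sketch, assuming only the tuple fields named in Definitions 4.1 and 4.2; the class names and typing choices are illustrative and not taken from the paper's implementation.

```python
# Minimal sketch of Definitions 4.1 and 4.2 (illustrative names, not the paper's code).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class WebPage:
    id: int   # unique identification number
    co: str   # content of the page
    tp: str   # topic from the ontology knowledge base
    l: str    # expertise level: "beginning", "intermediate" or "advanced"

@dataclass
class UserProfile:
    id: int                                                         # unique identification number
    bh: List[Tuple[WebPage, float]] = field(default_factory=list)   # browsing history: (page, preference in [0, 1])
    ch: str = ""                                                     # chatting history in natural language
    ls: List[Tuple[str, str]] = field(default_factory=list)         # levels of expertise: (topic, level)
```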

  14. Feedback Extractor • It collects feedback to make a final assessment of user preference. • A feedback indicator for a webpage pg is a function that returns 0 or 1, where 0/1 means negative/positive correlation with user preference. • Four implicit feedback indicators are employed (see the sketch after this slide): • Reading Time: returns 1 if the user reads pg for longer than φt, where φt is a predefined threshold; 0 otherwise. • Scroll: returns 1 if the number of user scrolls on pg is greater than a threshold φs. • Print/Save: returns 1 if the user prints or saves pg. • Relational Index: returns 1 if keywords of pg appear in the user's chatting history ch more than φr times.
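As a rough illustration of how the four binary indicators could be computed, here is a hedged Python sketch. The threshold symbols φt, φs and φr follow the slide; the function names, argument types and the keyword-counting heuristic are assumptions.

```python
# Sketch of the four implicit feedback indicators (assumed signatures and counting heuristic).
from typing import List

def reading_time_indicator(reading_time_sec: float, phi_t: float) -> int:
    """1 if the user read the page for longer than the threshold phi_t, else 0."""
    return 1 if reading_time_sec > phi_t else 0

def scroll_indicator(scroll_count: int, phi_s: int) -> int:
    """1 if the number of scrolls on the page is greater than phi_s, else 0."""
    return 1 if scroll_count > phi_s else 0

def print_save_indicator(printed: bool, saved: bool) -> int:
    """1 if the user printed or saved the page, else 0."""
    return 1 if (printed or saved) else 0

def relational_index_indicator(keywords: List[str], chat_history: str, phi_r: int) -> int:
    """1 if the page's keywords appear in the chatting history more than phi_r times, else 0."""
    occurrences = sum(chat_history.lower().count(kw.lower()) for kw in keywords)
    return 1 if occurrences > phi_r else 0
```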

  15. Feedback Extractor – Fusion Model • To combine all these feedbacks into a final assessment of user preference, we map the problem into an information fusion process. • Define H as the hypothesis that the user has a positive preference. Given independent indicators I1, I2, ..., In (n > 1), the posterior probability P(H | I1, I2, ..., In) is the joint declaration, which can be assessed using the Bayesian method (see the reconstruction below). The probabilities involved are called model parameters, which can be assessed through statistical analysis of training data.
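The fusion equation on this slide was an image and did not survive transcription. A standard Bayesian reconstruction under the stated independence assumption is given below; it may differ in notation from the exact formula printed in the paper.

```latex
% Bayesian fusion of n independent binary indicators (reconstruction, not the paper's exact typesetting)
P(H \mid I_1, \dots, I_n)
  = \frac{P(H)\prod_{i=1}^{n} P(I_i \mid H)}
         {P(H)\prod_{i=1}^{n} P(I_i \mid H) + P(\bar{H})\prod_{i=1}^{n} P(I_i \mid \bar{H})}
```

Under this reading, the prior P(H) and the conditional probabilities P(I_i | H) and P(I_i | H̄) are the model parameters estimated from the training data.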

  16. User Profiler (1/2) • Briefly, the User Profiler has two tasks: 1. Assessment of Expertise Level • Input: the user's browsing history with the preferences assessed by the Feedback Extractor. • Output: the user's levels of expertise. 2. Providing a Guideline for Delivery • Input: the user's browsing history and levels of expertise. • Output: a list of webpages which are potentially interesting to the user.

  17. User Profiler (2/2) • Assessment of Expertise Level • Basically, expertise levels are determined by average preferences. The webpages a user has read on any topic tp can have different levels (beginning, intermediate, advanced). The user's level on tp is the level with the highest average preference (see the sketch below). • Guideline for Delivery • Information delivery is based on a collaborative filtering algorithm.
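A minimal Python sketch of the level-assessment step, assuming the WebPage structure sketched after slide 13; the function name and the use of a plain arithmetic mean are assumptions.

```python
# Sketch: the user's level on a topic is the level whose pages received the highest average preference.
from collections import defaultdict
from statistics import mean

def assess_level(browsing_history, topic):
    """browsing_history: list of (page, preference) pairs; page has .tp (topic) and .l (level)."""
    prefs_by_level = defaultdict(list)
    for page, preference in browsing_history:
        if page.tp == topic:
            prefs_by_level[page.l].append(preference)
    if not prefs_by_level:
        return None  # the user has not read anything on this topic
    return max(prefs_by_level, key=lambda level: mean(prefs_by_level[level]))
```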

  18. User Profiler – Guideline for Delivery • Given two users U1 and U2, let pg1, pg2, …, pgn be the common pages they have both read, with feedback x1, x2, …, xn and y1, y2, …, yn respectively. Assume the average feedbacks on pages pg1, pg2, …, pgn are ϖ1, ϖ2, …, ϖn. A similarity function S on U1 and U2 is defined using the Pearson correlation coefficient (reconstructed below).
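The similarity formula itself appeared as an image on the slide. A conventional Pearson-style reconstruction that centers each user's feedback on the page averages ϖi is:

```latex
% Pearson-style similarity over the n commonly read pages (reconstruction)
S(U_1, U_2) =
  \frac{\sum_{i=1}^{n} (x_i - \varpi_i)(y_i - \varpi_i)}
       {\sqrt{\sum_{i=1}^{n} (x_i - \varpi_i)^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \varpi_i)^2}}
```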

  19. Given any active user Ux, using (1) we can find the n users with the highest similarity, called the n neighbors {U1, U2, …, Un} of Ux. The preference of Ux on a page pg, denoted px, can then be predicted from the already known preferences of the neighbors, denoted p1, p2, …, pn. Given that ϖ is the average rating on page pg, the prediction takes the form reconstructed below. • The webpages with the highest interest predictions are the potentially interesting pages for Ux, which will be delivered without requests.
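The prediction formula was likewise an image; the conventional neighborhood-weighted form consistent with the slide's description (the average page rating ϖ plus similarity-weighted deviations of the neighbors' preferences) is:

```latex
% Neighborhood-based prediction of U_x's preference on page pg (reconstruction)
p_x = \varpi +
  \frac{\sum_{i=1}^{n} S(U_x, U_i)\,(p_i - \varpi)}
       {\sum_{i=1}^{n} \lvert S(U_x, U_i) \rvert}
```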

  20. Prototype System

  21. Experiments • For system training purposes, we asked a group of students to do the following experiments: • Step 1: select a topic such as “E-R diagram” or “C++”, and let the students indicate their level on it in terms of beginning, intermediate, or advanced. • Step 2: ask the students to use the learner client (Figure 5) to read the prepared webpages. • Step 3: require the students to seriously rate their interest in every article they have read, from 1 (the least) to 5 (the most).

  22. Preliminary Results and Analysis • Compared with the levels provided by the users, the level assessment algorithm achieves 83.2% accuracy. However, the means of evaluating information delivery are still under development.

  23. Discussion and Future Research • In this paper, we described our ongoing research on the personalized e-learning system, and the prototype system and preliminary results are presented. • However, the fourth indicator, the relational index, has not yet been tested in the experiments. • Furthermore, the usability of the system has not been fully verified by end users, especially the quality of information delivery. • On the other hand, although ontology knowledge is used for content classification, its structure could be much more complicated, and its usage could be extended to feedback extraction and user profiling.

  24. Research on Personalized E-Learning System Using Fuzzy Set Based Clustering Algorithm. F. Lu, X. Li, Q. Liu, Z. Yang, G. Tan, and T. He, Engineering & Research Center for Information Technology on Education, Huazhong Normal University, China; Science and Technology of Education Department, Hunan University of Science and Engineering, China. In Proceedings of the 7th International Conference on Computational Science (ICCS), 2007, pp. 587–590. Presenter: Hsiao-Pei Chang

  25. Outline • Abstract • Personalized E-Learning System • The Key Techniques and Algorithms • Calculate Difficulty Parameters of Course Materials Dynamically • Estimate the Level of Learner Abilities • Conclusion & My Comments

  26. Abstract • Personalized service is becoming increasingly important, especially in the e-learning field. • Existing systems only take learners' preferences, interests and browsing behaviors into consideration. • They neglect whether the learner's ability and the difficulty level of the recommended learning materials are matched to each other. • This paper proposes a personalized e-learning system using a fuzzy set based clustering algorithm which considers both the course materials' difficulty and the learners' ability, providing appropriate learning materials for each learner individually to help learners learn more efficiently and effectively.

  27. Personalized E-Learning System – The Key Techniques and Algorithms • Calculate Difficulty Parameters of Course Materials Dynamically • The course materials' difficulty level is classified into five points: {very hard, hard, moderate, easy, very easy}, depicted as Dif1, Dif2, Dif3, Dif4, Dif5. • Dexp denotes the experts' assessment of material difficulty, which can be calculated by formula (1). • The material difficulty parameters assessed by the learners and teachers can be calculated by formula (2), marked as Dstu and Dtea respectively. • The adjusted difficulty parameter of a certain material is calculated by formula (3), where w1 and w2 (0 < 1 - w1 - w2 < w2 < w1 < 1) denote the weights of the course difficulty parameters (a plausible reconstruction follows).
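Formula (3) did not survive transcription. One plausible reading consistent with the weight constraint (the residual weight 1 - w1 - w2 being the smallest) is a weighted average of the three assessments; the pairing of weights with assessors below is an assumption, not the printed formula.

```latex
% Assumed form of formula (3): weighted average of expert, teacher and learner assessments
D_{\mathrm{adj}} = w_1 D_{\mathrm{exp}} + w_2 D_{\mathrm{tea}} + (1 - w_1 - w_2) D_{\mathrm{stu}},
\qquad 0 < 1 - w_1 - w_2 < w_2 < w_1 < 1
```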

  28. Personalized E-Learning System – The Key Techniques and Algorithms • Estimate the Level of Learner Abilities • In e-learning systems, there are many variables that determine a learner's ability. • We form a factor set for the system, depicted as U = {u1, u2, u3, u4, u5}, where: • u1 denotes the time spent on each material • u2 denotes the questionnaire feedback after each material • u3 denotes the quiz feedback after each material • u4 denotes the number of clicks on antecedent material links • u5 denotes random mouse moves and clicks.

  29. ⑴ u1: the time spent on each material, with weight w1. • The assessment set of u1 is depicted as V1 = {very long, long, moderate, short, very short}; the corresponding weight set is depicted as A = {a1, a2, a3, a4, a5}. Ti represents a learner's browsing time on a certain material. • We compute the learners' average browsing time on a certain material, as in formula (4). • Notably, values of Ti that are too large or too small are excluded. • Then we can use statistical methods to calculate the ratio of each assessment element in the assessment set V1 for a certain material. The variable b denotes the bias of the time period.

  30. The membership degree of the different time ranges corresponding to V1 is defined by formula (6), where • a1 denotes one element of the assessment set, {very long}, • T1, T2, …, Ti denote the learners whose browsing time falls between Trange|b=c1 (computing Trange when b = c1) and Trange|b=c2, • the numerator lT1 + lT2 + … + lTi denotes the number of learners in that group, and • n denotes the total number of learners that have browsed the material (a reconstruction follows).
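Reading the description above, formula (6) appears to be the fraction of learners whose browsing time falls in the range associated with a given assessment element; a reconstruction under that assumption:

```latex
% Assumed form of formula (6): membership degree of assessment element a_1 ("very long")
\mu_{a_1} = \frac{l_{T_1} + l_{T_2} + \dots + l_{T_i}}{n}
```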

  31. ⑵ u2: questionnaire feedback after each material, with weight w2. • The assessment set of u2 is depicted as V2 = {completely understand, understand, moderate, little understanding, completely not understand}, and its corresponding weight set is depicted as B = {b1, b2, b3, b4, b5}. ⑶ u3: quiz feedback after each material, with weight w3. • V3 = {very good, good, moderate, bad, very bad}, and weight set C = {c1, c2, c3, c4, c5}. ⑷ u4: times clicking on antecedent material links, with weight w4. • V4 = {very many, many, moderate, few, very few} ⑸ u5: random mouse moves and clicks, with weight w5. • V5 = {very many, many, moderate, few, very few}

  32. Calculate the learners' abilities. For example: • The factor weight set is W = {0.2, 0.35, 0.3, 0.1, 0.05}. Assume we have the result data of a learner X, whose vector set has already been calculated as vx = (0.9, 0.5, 0.8, 0.8, 0.8). Learner X's ability is calculated by formula (7), worked out below. • Before we rank learner X's ability level, we classify the range 0–1 into five points: {very high ability, high ability, moderate ability, low ability, very low ability}. • X's ability result is 0.715 → {high ability}
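The numbers on this slide are consistent with formula (7) being a simple weighted sum of the factor scores; the arithmetic worked out under that assumption:

```latex
% Worked example, assuming formula (7) is the weighted sum W \cdot v_x
\mathrm{ability}(X) = \sum_{k=1}^{5} w_k\, v_{x,k}
  = 0.2\cdot 0.9 + 0.35\cdot 0.5 + 0.3\cdot 0.8 + 0.1\cdot 0.8 + 0.05\cdot 0.8
  = 0.18 + 0.175 + 0.24 + 0.08 + 0.04 = 0.715
```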

  33. Conclusion & Comments • The system provides personalized learning according to the course materials visited by learners and their responses. • It provides personalized course material recommendations based on learner ability, and moreover improves learners' learning efficiency and effectiveness. • Our current personalization method focuses on the degree of change in students' vocabulary, grammar and reading level after tests. • We should consider whether or not to adopt the information fusion and fuzzy techniques into our current model.
