== Setting ==
Inductive logic programming has adopted several different learning settings, the most common of which are learning from [[entailment]] and learning from interpretations.<ref name="setting">{{Cite journal |last1=Cropper |first1=Andrew |last2=Dumančić |first2=Sebastijan |date=2022-06-15 |title=Inductive Logic Programming At 30: A New Introduction |journal=Journal of Artificial Intelligence Research |volume=74 |pages=779{{endash}}782 |doi=10.1613/jair.1.13507 |issn=1076-9757 |doi-access=free|arxiv=2008.07912 }}</ref> In both cases, the input is provided in the form of ''background knowledge {{mvar|B}}'', a logical theory (commonly in the form of [[Clause (logic)|clauses]] used in [[logic programming]]), as well as positive and negative examples, denoted <math display="inline">E^+</math> and <math display="inline">E^{-}</math> respectively. The output is given as a ''hypothesis'' ''{{mvar|H}}'', itself a logical theory that typically consists of one or more clauses. The two settings differ in the format of examples presented.

=== Learning from entailment ===
{{As of|2022}}, learning from entailment is by far the most popular setting for inductive logic programming.<ref name="setting" /> In this setting, the ''positive'' and ''negative'' examples are given as finite sets <math display="inline">E^+</math> and <math display="inline">E^{-}</math> of positive and negated [[Ground expression|ground]] [[Literal (mathematical logic)|literals]], respectively. A ''correct hypothesis'' ''{{mvar|H}}'' is a set of clauses satisfying the following requirements, where the turnstile symbol <math>\models</math> stands for [[logical entailment]]:<ref name="setting" /><ref>{{cite book |last1=Džeroski |first1=Sašo |title=Advances in Knowledge Discovery and Data Mining |publisher=MIT Press |year=1996 |editor1-last=Fayyad |editor1-first=U.M. |pages=117–152 See §5.2.4 |chapter=Inductive Logic Programming and Knowledge Discovery in Databases |access-date=2021-09-27 |editor2-last=Piatetsky-Shapiro |editor2-first=G. |editor3-last=Smith |editor3-first=P. |editor4-last=Uthurusamy |editor4-first=R. |chapter-url=http://kt.ijs.si/SasoDzeroski/pdfs/1996/Chapters/1996_InductiveLogicProgramming.pdf |archive-url=https://web.archive.org/web/20210927141157/http://kt.ijs.si/SasoDzeroski/pdfs/1996/Chapters/1996_InductiveLogicProgramming.pdf |archive-date=2021-09-27 |url-status=dead}}</ref><ref>{{Cite journal |last=De Raedt |first=Luc |date=1997 |title=Logical settings for concept-learning |url=https://linkinghub.elsevier.com/retrieve/pii/S0004370297000416 |journal=Artificial Intelligence |language=en |volume=95 |issue=1 |pages=187–201 |doi=10.1016/S0004-3702(97)00041-6}}</ref>

<math display="block">\begin{array}{llll} \text{Completeness:} & B \cup H & \models & E^+ \\ \text{Consistency: } & B \cup H \cup E^- & \not\models & \textit{false} \end{array}</math>

Completeness requires any generated hypothesis ''{{mvar|H}}'' to explain all positive examples <math display="inline">E^+</math>, and consistency forbids generation of any hypothesis ''{{mvar|H}}'' that is inconsistent with the negative examples <math display="inline">E^{-}</math>, both given the background knowledge ''{{mvar|B}}''.
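To make the two conditions concrete, the following minimal Python sketch encodes the classic family-relations example. The encoding (atoms as tuples, uppercase strings as variables) and the helper names <code>ground</code> and <code>least_model</code> are illustrative choices rather than constructs from the cited sources; entailment of ground atoms by definite clauses is decided here by naive forward chaining over the Herbrand base.

<syntaxhighlight lang="python">
from itertools import product

# Atoms are tuples such as ("parent", "ann", "bob"); argument strings that
# start with an uppercase letter are treated as variables. This encoding is
# an illustrative choice, not part of any particular ILP system.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def ground(clause, constants):
    """Yield every ground instance of a clause (head, body) over the constants."""
    head, body = clause
    variables = sorted({t for atom in [head, *body] for t in atom[1:] if is_var(t)})
    for values in product(sorted(constants), repeat=len(variables)):
        theta = dict(zip(variables, values))
        def subst(atom):
            return (atom[0], *(theta.get(t, t) for t in atom[1:]))
        yield subst(head), [subst(b) for b in body]

def least_model(clauses, constants):
    """Naive forward chaining: the least Herbrand model of definite clauses."""
    ground_clauses = [g for c in clauses for g in ground(c, constants)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in ground_clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Background knowledge B (facts are clauses with empty bodies) and a
# candidate hypothesis H defining the target predicate grandparent/2.
B = [(("parent", "ann", "bob"), []), (("parent", "bob", "carl"), [])]
H = [(("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")])]
E_pos = {("grandparent", "ann", "carl")}   # positive examples E+
E_neg = {("grandparent", "bob", "ann")}    # negative examples E-

M = least_model(B + H, constants={"ann", "bob", "carl"})
print(E_pos <= M)        # completeness: B ∪ H entails every positive example -> True
print(not (E_neg & M))   # consistency: no negative example is entailed -> True
</syntaxhighlight>

Here the single clause for <code>grandparent/2</code> covers the positive example while leaving the negative example underivable, so this ''{{mvar|H}}'' is a correct hypothesis with respect to ''{{mvar|B}}'', <math display="inline">E^+</math> and <math display="inline">E^{-}</math>.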
In Muggleton's setting of concept learning,<ref name="setting2">{{cite journal |last1=Muggleton |first1=Stephen |year=1999 |title=Inductive Logic Programming: Issues, Results and the Challenge of Learning Language in Logic |journal=Artificial Intelligence |volume=114 |issue=1–2 |pages=283–296 |doi=10.1016/s0004-3702(99)00067-3 |doi-access=}}; here: Sect.2.1</ref> "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added: "''Necessity''", which postulates that ''{{mvar|B}}'' does not entail <math display="inline">E^+</math>, does not impose a restriction on ''{{mvar|H}}'', but forbids any generation of a hypothesis as long as the positive facts are explainable without it. "''Weak consistency''", which states that no contradiction can be derived from <math display="inline">B\land H</math>, forbids generation of any hypothesis ''{{mvar|H}}'' that contradicts the background knowledge ''{{mvar|B}}''. Weak consistency is implied by strong consistency; if no negative examples are given, both requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.<ref name="setting2" />

=== Learning from interpretations ===
In learning from interpretations, the ''positive'' and ''negative'' examples are given as a set of complete or partial [[Herbrand structure]]s, each of which is itself a finite set of ground literals. Such a structure ''{{mvar|e}}'' is said to be a model of the set of clauses <math display="inline">B \cup H</math> if for any [[Substitution (logic)|substitution]] <math display="inline">\theta</math> and any clause <math display="inline">\mathrm{head} \leftarrow \mathrm{body}</math> in <math display="inline">B \cup H</math> such that <math display="inline">\mathrm{body}\theta \subseteq e</math>, <math>\mathrm{head}\theta \subseteq e</math> also holds. The goal is then to output a hypothesis that is ''complete'', meaning every positive example is a model of <math display="inline">B \cup H</math>, and ''consistent'', meaning that no negative example is a model of <math display="inline">B \cup H</math>.<ref name="setting" />
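As an illustration, the sketch below (again under the same toy encoding, not a construction from the cited sources) tests whether a given Herbrand interpretation is a model of <math display="inline">B \cup H</math> by enumerating the ground instances of each clause:

<syntaxhighlight lang="python">
from itertools import product

# Same toy encoding as above: atoms are tuples, uppercase strings are variables.
def is_model(interp, clauses, constants):
    """True iff the Herbrand interpretation `interp` is a model of `clauses`:
    whenever a ground instance's body is contained in `interp`, its head
    must be in `interp` as well."""
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()
    for head, body in clauses:
        variables = sorted({t for atom in [head, *body] for t in atom[1:] if is_var(t)})
        for values in product(sorted(constants), repeat=len(variables)):
            theta = dict(zip(variables, values))
            def g(atom):
                return (atom[0], *(theta.get(t, t) for t in atom[1:]))
            if all(g(b) in interp for b in body) and g(head) not in interp:
                return False
    return True

constants = {"ann", "bob", "carl"}
BH = [(("parent", "ann", "bob"), []),        # background knowledge B
      (("parent", "bob", "carl"), []),
      (("grandparent", "X", "Z"),            # hypothesis H
       [("parent", "X", "Y"), ("parent", "Y", "Z")])]

# A positive example: a complete interpretation that is a model of B ∪ H.
e_pos = {("parent", "ann", "bob"), ("parent", "bob", "carl"),
         ("grandparent", "ann", "carl")}
# A negative example: the same facts without the grandparent atom.
e_neg = {("parent", "ann", "bob"), ("parent", "bob", "carl")}

print(is_model(e_pos, BH, constants))  # True: this positive example is covered
print(is_model(e_neg, BH, constants))  # False: the negative example is rejected
</syntaxhighlight>

The interpretation containing <code>grandparent(ann, carl)</code> satisfies every ground instance and so counts toward completeness, while the interpretation missing that atom falsifies a fired clause and is therefore, as consistency requires, not a model of <math display="inline">B \cup H</math>.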