This repository was archived by the owner on Oct 8, 2020. It is now read-only.

Commit 800d2b2

Merge branch 'develop' of github.com:SANSA-Stack/SANSA-Inference into develop
2 parents: 03c91c9 + 2f30916

1 file changed: README.md (105 additions & 4 deletions)
Contains the core Inference API based on Apache Flink.

### sansa-inference-tests
Contains common test classes and data.

## Setup
### From source

To install the SANSA Inference API, you need to download it via Git and install it via Maven:
```shell
git clone https://github.com/SANSA-Stack/SANSA-Inference.git
cd SANSA-Inference
mvn clean install
```

Afterwards, you have to add the dependency to your pom.xml.

For Apache Spark:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-spark</artifactId>
  <version>0.1.0-SNAPSHOT</version>
</dependency>
```

For Apache Flink:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-flink</artifactId>
  <version>0.1.0-SNAPSHOT</version>
</dependency>
```

### Using Maven pre-built artifacts

1. Add the AKSW Maven repository to your pom.xml (will be added to Maven Central soon)
```xml
<repository>
  <id>maven.aksw.snapshots</id>
  <name>University Leipzig, AKSW Maven2 Repository</name>
  <url>http://maven.aksw.org/archiva/repository/snapshots</url>
  <releases>
    <enabled>false</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

<repository>
  <id>maven.aksw.internal</id>
  <name>University Leipzig, AKSW Maven2 Internal Repository</name>
  <url>http://maven.aksw.org/archiva/repository/internal</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>
```
2. Add the dependency to your pom.xml

For Apache Spark:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-spark</artifactId>
  <version>0.1.0-SNAPSHOT</version>
</dependency>
```

For Apache Flink:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-flink</artifactId>
  <version>0.1.0-SNAPSHOT</version>
</dependency>
```
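
Putting steps 1 and 2 together, a minimal pom.xml for a project using the Spark module might be sketched as follows; the project coordinates `com.example:my-app` are illustrative placeholders, not part of the SANSA documentation:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- placeholder coordinates for your own project -->
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- AKSW snapshot repository from step 1 -->
  <repositories>
    <repository>
      <id>maven.aksw.snapshots</id>
      <url>http://maven.aksw.org/archiva/repository/snapshots</url>
      <snapshots><enabled>true</enabled></snapshots>
    </repository>
  </repositories>

  <!-- SANSA Inference Spark module from step 2 -->
  <dependencies>
    <dependency>
      <groupId>net.sansa-stack</groupId>
      <artifactId>sansa-inference-spark</artifactId>
      <version>0.1.0-SNAPSHOT</version>
    </dependency>
  </dependencies>
</project>
```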

### Using SBT
The SANSA Inference API has not been published on Maven Central yet, so you have to add the additional repositories as follows:
```scala
resolvers ++= Seq(
  "AKSW Maven Releases" at "http://maven.aksw.org/archiva/repository/internal",
  "AKSW Maven Snapshots" at "http://maven.aksw.org/archiva/repository/snapshots"
)
```
Then you have to add a dependency on either the Apache Spark or the Apache Flink module.

For Apache Spark add
```scala
"net.sansa-stack" % "sansa-inference-spark" % VERSION
```

and for Apache Flink add
```scala
"net.sansa-stack" % "sansa-inference-flink" % VERSION
```

where `VERSION` is the released version of the Inference API you want to use.
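
For reference, the SBT snippets above might be combined into a minimal build.sbt like this; the project name and the concrete `0.1.0-SNAPSHOT` version are illustrative placeholders:

```scala
// build.sbt -- minimal sketch combining the resolvers and the Spark module
name := "my-sansa-app" // placeholder project name

// AKSW repositories, since the artifacts are not on Maven Central yet
resolvers ++= Seq(
  "AKSW Maven Releases" at "http://maven.aksw.org/archiva/repository/internal",
  "AKSW Maven Snapshots" at "http://maven.aksw.org/archiva/repository/snapshots"
)

// pick sansa-inference-flink instead for the Flink module
libraryDependencies += "net.sansa-stack" % "sansa-inference-spark" % "0.1.0-SNAPSHOT"
```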

## Usage
```
RDFGraphMaterializer 0.1.0
Usage: RDFGraphMaterializer [options]

  -i <file> | --input <file>
        the input file in N-Triple format
  -o <directory> | --out <directory>
        the output directory
  --single-file
        write the output to a single file in the output directory
  --sorted
        sorted output of the triples (per file)
  -p {rdfs | owl-horst} | --profile {rdfs | owl-horst}
        the reasoning profile
  --help
        prints this usage text
```
### Example

`RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs` will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
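
If the materializer is run on a Spark cluster rather than invoked directly, the example might be submitted along the following lines. Note that the main-class name and jar file name below are assumptions for illustration only and are not taken from the project documentation; check your build output for the actual values:

```shell
# Hypothetical launch via spark-submit; the class name and jar path are
# placeholders -- adjust them to what your build actually produces.
spark-submit \
  --master local[*] \
  --class net.sansa_stack.inference.spark.RDFGraphMaterializer \
  sansa-inference-spark-0.1.0-SNAPSHOT.jar \
  -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
```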
