<h3 align="center">LSM Group: Knowledge Graphs - Mini Project - Summer Term 2021</h3>

<p align="left">


This repository represents our work regarding the mini-project for the Foundations of Knowledge Graphs lecture at Paderborn University in Germany. We were provided with 25 learning problems from the Carcinogenesis dataset, each having included and excluded components. The task has been to develop a classifier that can determine the carcinogenicity of new components based on the learning problems from the Carcinogenesis dataset.

</p>


...

...


<!-- APPROACH -->

## Approach


We decided on using embeddings to represent the carcinogenesis dataset in an efficient form.

This was done using the PyKeen library, which offers a myriad of different embedding models.
Further, it can be configured with different parameters, such as the number of epochs or the
dimension of the generated embedding. In our tests, the embedding model "TransR" worked best
with our approach.

TransR is a translation-based approach similar to TransE, with the addition that it represents
relations and entities in different vector spaces, thereby increasing the spatial distance
between instances.
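The projection step can be illustrated with a toy NumPy sketch (the dimensions and vectors are made up; this shows the scoring idea, not PyKeen's implementation):

```python
import numpy as np

# Toy TransR scoring: entities live in a d_e-dimensional space, relations
# in a d_r-dimensional one; a relation-specific matrix M_r projects the
# entities into relation space before the TransE-style translation.
d_e, d_r = 4, 3
rng = np.random.default_rng(0)

h = rng.normal(size=d_e)            # head entity embedding
t = rng.normal(size=d_e)            # tail entity embedding
r = rng.normal(size=d_r)            # relation embedding (relation space)
M_r = rng.normal(size=(d_r, d_e))   # projection matrix for this relation

h_proj = M_r @ h                    # project head into relation space
t_proj = M_r @ t                    # project tail into relation space

# Plausibility score: negative distance of the translated head to the tail
score = -np.linalg.norm(h_proj + r - t_proj)
print(score)  # closer to 0 = more plausible triple
```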

To make predictions using these embeddings, we first used typical machine learning algorithms
such as random forests, logistic regression, or k-nearest neighbours (kNN). In doing so, we
encountered the problem that many of the learning problems have a highly unbalanced ratio of
positive and negative (included and excluded) instances.

For learning problems that had an extremely high proportion of negative (excluded) instances,
the classification algorithms classified all instances as negative, since these algorithms
typically optimize accuracy instead of the F1 score.
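A tiny example with made-up numbers shows why optimizing accuracy fails here: a degenerate classifier that always predicts the majority (negative) class achieves high accuracy but an F1 score of zero.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical learning problem: 95 excluded (0) and 5 included (1) instances.
y_true = [0] * 95 + [1] * 5
# A classifier that always predicts the negative class:
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks great
print(f1_score(y_true, y_pred))        # 0.0  -- useless for the positive class
```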

To overcome this problem, we tried to balance the training data before training. Since
undersampling with a very small number of positive instances leads to a very small training
data set, we decided to oversample instead. The oversampling algorithm we used is the SMOTE
implementation of the sklearn extension
imbalanced-learn (https://github.com/scikit-learn-contrib/imbalanced-learn). In simple terms, SMOTE calculates

new synthetic data points for the smaller class, each of which lies on the line between two data points of this class.
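The core interpolation step can be sketched in a few lines of NumPy (a simplification of what imbalanced-learn actually does, using made-up points):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two minority-class points (e.g. embedding vectors of included instances)
x_i = np.array([1.0, 2.0])
x_nn = np.array([3.0, 6.0])  # one of x_i's nearest minority-class neighbours

# SMOTE places a synthetic sample at a random point on the segment between them
gap = rng.uniform(0.0, 1.0)
x_new = x_i + gap * (x_nn - x_i)

print(x_new)  # lies on the line between x_i and x_nn
```

In practice the library version is used via `SMOTE().fit_resample(X, y)` from `imblearn.over_sampling`, which repeats this interpolation until the classes are balanced.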

Using this technique together with a linear SVM, we were able to at least partially mitigate the problem of overweighting the negative class.

With this approach, we achieved F1 scores ranging from \<lower_bound> up to \<higher_bound> for the given test learning problems.


We split the data into learning and test in a ratio of \<ratio>.
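Such a split can be done with scikit-learn's `train_test_split`; the actual ratio is left as a placeholder above, so the `test_size`, data, and labels below are purely illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # toy feature matrix (e.g. entity embeddings)
y = np.array([0] * 6 + [1] * 4)    # toy labels

# test_size=0.2 is an illustrative ratio, not the one used in the project;
# stratify=y keeps the class ratio similar in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(len(X_train), len(X_test))  # 8 2
```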

## Other approaches

We tried out several different approaches to tackle the given task of classifying entities. These approaches can be found

in the folder "other_approaches" as Jupyter notebooks.

### SKLEARN Clustering

In the notebook "dbscan_clustering.ipynb" we explored the possibility of using clustering algorithms from sklearn to classify the given entities. We chose DBSCAN, as scikit-learn states that it works well with imbalanced datasets. Unfortunately, this approach did not yield good results and was therefore not pursued further.
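For illustration, a minimal DBSCAN run on toy stand-in data (the notebook operates on the entity embeddings instead; `eps` and `min_samples` here are arbitrary choices):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy stand-in for entity embeddings: two tight blobs plus one outlier
blob_a = rng.normal(loc=0.0, scale=0.1, size=(10, 2))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(10, 2))
outlier = np.array([[2.5, 2.5]])
X = np.vstack([blob_a, blob_b, outlier])

# DBSCAN assigns cluster ids 0, 1, ... and labels sparse points as noise (-1)
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(set(labels.tolist()))  # {0, 1, -1}: two clusters plus noise
```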

### PyTorch Geometric Graph Neural Network

A second approach was the implementation of a graph neural network using the pytorch_geometric library, i.e. a deep learning approach. The idea was to use a graph neural network for classification based on the labels of the learning problems and the edges of the knowledge graph. The first step was to fit the network on the training data using CrossEntropyLoss as the loss function and, after that, to classify all individuals (even the ones used for training). The network computes a probability distribution over the labels for each individual, and each individual is assigned to the class with the highest probability. However, since the data are very imbalanced, all individuals were assigned to the negative (excluded) class and the F1 score was not very meaningful. Since we could not find a solution to this problem, we did not pursue this approach further.

<!-- PREREQUISITES -->

### Prerequisites

...

...
