Archive 1.x:Multi-structWSF Server Instance

Introduction
This tutorial shows you how to create a multi-structWSF server instance. The goal of having multiple structWSF instances on the same server is to have totally independent websites that use different structWSF & OSF-Drupal instances and datasets. This means, for example, that you can create a development instance and a production instance on the same physical server, without worrying about modifying anything on your production instance when you play with the development one.

So far, two kinds of setups were available:


 * 1) One structWSF and one OSF-Drupal instance on the same server
 * 2) One structWSF instance for multiple OSF-Drupal instances on the same server
 * 3) And this tutorial adds a new one: multiple structWSF instances for multiple OSF-Drupal instances on the same server

This tutorial is an extension to the main StructWSF Installation Guide.

Any instance that has been set up as #1 or #2 can be upgraded to #3 without too much work. This tutorial tells you which settings to modify, which folder structure to change, and which software to re-configure. We will start by changing the settings and folder structure of the structWSF & OSF-Drupal instances. Then we will modify the configuration of some underlying software such as Solr, Apache 2, and so on.

The following conventions are used in this document:


 * Black text is used for information and directions.
 * Brown text represents commands that should be copied and pasted into Terminal.
 * Green text is text to be cut and pasted into a text editor (vim, vi, pico, etc.).
 * Orange text represents responses from the computer.
 * Red text represents text in a command that you may wish to change to reflect your own names.
 * Purple text represents something to notice.

Apache 2
The first step is to set up Apache 2 so that it recognizes all the domains that will be bound to structWSF instances.

The first thing to do, if you used the default installation process, is to remove the "default" file from the "sites-enabled" folder. This file is probably located here:

 /etc/apache2/sites-enabled/default

The next step is to create one Apache configuration file per domain name.

For the structWSF/conStruct instance A:

 vim /etc/apache2/sites-available/a.mystructwsf.com.conf

Then copy and paste this configuration into the vim window, and modify it to suit your needs:

 NameVirtualHost *

 <VirtualHost *>
     DocumentRoot /usr/share/drupal
     ServerName a.mystructwsf.com
     ServerAdmin webmaster@a.mystructwsf.com

     CustomLog /usr/share/drupal/sites/a.mystructwsf.com/logs/a.mystructwsf.com.access.log combined
     ErrorLog /usr/share/drupal/sites/a.mystructwsf.com/logs/a.mystructwsf.com.error.log

     <Directory /usr/share/drupal>
         Options FollowSymLinks MultiViews
         DirectoryIndex index.php
         AllowOverride All
         Order allow,deny
         allow from all
     </Directory>

     Alias /ws /usr/share/structwsf/structwsf_instance_A

     <Directory /usr/share/structwsf/structwsf_instance_A>
         Options +FollowSymLinks
         AllowOverride All
         Order allow,deny
         allow from all
     </Directory>
 </VirtualHost>

For the structWSF/conStruct instance B:

 vim /etc/apache2/sites-available/b.mystructwsf.com.conf

Then copy and paste this configuration into the vim window, and modify it to suit your needs:

 NameVirtualHost *

 <VirtualHost *>
     DocumentRoot /usr/share/drupal
     ServerName b.mystructwsf.com
     ServerAdmin webmaster@b.mystructwsf.com

     CustomLog /usr/share/drupal/sites/b.mystructwsf.com/logs/b.mystructwsf.com.access.log combined
     ErrorLog /usr/share/drupal/sites/b.mystructwsf.com/logs/b.mystructwsf.com.error.log

     <Directory /usr/share/drupal>
         Options FollowSymLinks MultiViews
         DirectoryIndex index.php
         AllowOverride All
         Order allow,deny
         allow from all
     </Directory>

     Alias /ws /usr/share/structwsf/structwsf_instance_B

     <Directory /usr/share/structwsf/structwsf_instance_B>
         Options +FollowSymLinks
         AllowOverride All
         Order allow,deny
         allow from all
     </Directory>
 </VirtualHost>

Once these new configuration files are created, we have to create a symbolic link for each of them in the sites-enabled folder:

 ln -s /etc/apache2/sites-available/a.mystructwsf.com.conf /etc/apache2/sites-enabled/a.mystructwsf.com.conf
 ln -s /etc/apache2/sites-available/b.mystructwsf.com.conf /etc/apache2/sites-enabled/b.mystructwsf.com.conf
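Note that Apache 2 will refuse to start if the directories referenced by the CustomLog and ErrorLog directives do not exist. A quick way to create them, assuming the log paths used in the configuration files above:

 mkdir -p /usr/share/drupal/sites/a.mystructwsf.com/logs/
 mkdir -p /usr/share/drupal/sites/b.mystructwsf.com/logs/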

With the configuration files above, each OSF-Drupal instance will be accessible from its own domain (for example http://a.mystructwsf.com), and the related structWSF endpoints will be accessible under http://a.mystructwsf.com/ws/*

Finally, make sure to restart Apache 2 so that we can use it to re-create the ontological structure files below:

 /etc/init.d/apache2 restart
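If anything goes wrong at this point, it can be useful to validate the configuration syntax and check that both virtual hosts answer. A minimal sketch, assuming a Debian-style Apache 2 installation:

 # Validate the Apache configuration before/after restarting
 apache2ctl configtest

 # Check that both virtual hosts respond
 curl -I http://a.mystructwsf.com/
 curl -I http://b.mystructwsf.com/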

structWSF
The main purpose of this tutorial is to have multiple structWSF instances running independently on the same server. To do so, we have to re-configure some settings, change some of the folder structure, and re-configure a few pieces of software used by any structWSF instance.

structWSF Folder Structure
Since we want multiple structWSF instances running on the same server, we need multiple codebases, one for each instance. We suggest re-using the /usr/share/structwsf/ folder that we created in the installation guide, and creating a new sub-folder for each new instance we want on that server.

Let's create the new folders for two structWSF instances:

 mkdir -p /usr/share/structwsf/structwsf_instance_A/
 mkdir -p /usr/share/structwsf/structwsf_instance_B/

The next step is to put the structWSF codebase in each folder. You can re-download the codebase, or simply duplicate the instance you configured in the installation guide into both folders.
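As an example, here is one way to seed both instance folders from an existing codebase. The source path /tmp/structwsf-src is an assumption; replace it with wherever your current structWSF checkout or download actually lives:

 # Hypothetical source path; adjust to your own copy of the codebase
 cp -a /tmp/structwsf-src/. /usr/share/structwsf/structwsf_instance_A/
 cp -a /tmp/structwsf-src/. /usr/share/structwsf/structwsf_instance_B/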

Data Configuration Files
The first thing to modify is the structWSF data.ini configuration file. With a normal structWSF installation, users should have this configuration file structure:

 /usr/share/structwsf/data.ini

For a multi-structWSF instance, we recommend creating a root "data_ini" folder in the main "data" folder, with the data.ini file of each structWSF instance in its own sub-folder. This way, the data configuration files of each instance are easier to back up since they are part of the main data folder. The new structure should look like:

 /data/data_ini/structwsf_instance_A/data.ini
 /data/data_ini/structwsf_instance_B/data.ini

The next step is to create the folders that will hold these two different data.ini configuration files:

 mkdir -p /data/data_ini/structwsf_instance_A/
 mkdir -p /data/data_ini/structwsf_instance_B/

Then we have to create one data.ini file for each instance:

For the structWSF instance A:

 vim /data/data_ini/structwsf_instance_A/data.ini

Then copy and paste this configuration file into the vim window, and modify it to suit your needs:

 ; structWSF Data Configuration File
 ;
 ; All the settings defined in this configuration file are related to the data
 ; archived in the different datastores of a structWSF instance. We split the
 ; structWSF configuration files in two: one for the settings related to
 ; the data (this file), and one for the settings related to the network
 ; that runs the structWSF instance.
 ;
 ; This decision has been taken to help system administrators split the concerns
 ; between managing the data of a structWSF instance, and its software. Think
 ; about an Amazon EC2/EBS setting where the databases of the structWSF datastores
 ; are hosted on an EBS volume that can be attached to different running EC2
 ; instances.

 [datasets]

 wsf_graph = "http://a.mystructwsf.com/wsf/"
 ; The base URI of the graph where the structWSF structure description gets indexed

 dtd_base_url = "http://a.mystructwsf.com/ws/dtd/"
 ; DTD base URL where to resolve the DTDs used to share data

 [ontologies]

 ontologies_files_folder = "/data/ontologies/files/a.mystructwsf.com/"
 ; Ontologies description files (in RDFS and OWL)

 ontological_structure_folder = "/data/ontologies/structure/a.mystructwsf.com/"
 ; structWSF ontological structure

 [triplestore]

 username = "dba"
 ; Username used to connect to the triple store instance

 password = "dba"
 ; Password used to connect to the triple store instance

 dsn = "structwsf-triples-store"
 ; DSN used to connect to the triple store instance

 host = "localhost"
 ; Host used to connect to the triple store instance

 log_table = "SD.WSF.ws_queries_log_structwsf_a"
 ; Name of the logging table on the Virtuoso instance

 port = "8890"
 ; Port number where the triple store server is reachable

 [solr]

 wsf_solr_core = "structwsf_a"
 ; The core to use for Solr; use "" (double, double-quotes) when the "multicore"
 ; mode is not used

 host = "localhost"
 ; Host used to connect to the Solr instance

 solr_auto_commit = "FALSE"
 ; Auto-commit handled by the Solr data management system. If this parameter is true,
 ; then Solr will handle the commit operations by itself. If it is false, then the
 ; web services will trigger the commit operations. Usually, auto-commit should be handled
 ; by Solr when the size of the dataset is too big, otherwise operations such as delete
 ; could take a long time.

 port = "8983"
 ; Port number where the Solr server is reachable

 fields_index_folder = "/tmp/"
 ; This is the folder where the index file of all the fields defined in Solr
 ; is saved. You have to make sure that the web server has write access to this folder.
 ; This folder path has to end with a slash "/".

For the structWSF instance B:

 vim /data/data_ini/structwsf_instance_B/data.ini

Then copy and paste this configuration file into the vim window, and modify it to suit your needs:

 ; structWSF Data Configuration File
 ;
 ; All the settings defined in this configuration file are related to the data
 ; archived in the different datastores of a structWSF instance. We split the
 ; structWSF configuration files in two: one for the settings related to
 ; the data (this file), and one for the settings related to the network
 ; that runs the structWSF instance.
 ;
 ; This decision has been taken to help system administrators split the concerns
 ; between managing the data of a structWSF instance, and its software. Think
 ; about an Amazon EC2/EBS setting where the databases of the structWSF datastores
 ; are hosted on an EBS volume that can be attached to different running EC2
 ; instances.

 [datasets]

 wsf_graph = "http://b.mystructwsf.com/wsf/"
 ; The base URI of the graph where the structWSF structure description gets indexed

 dtd_base_url = "http://b.mystructwsf.com/ws/dtd/"
 ; DTD base URL where to resolve the DTDs used to share data

 [ontologies]

 ontologies_files_folder = "/data/ontologies/files/b.mystructwsf.com/"
 ; Ontologies description files (in RDFS and OWL)

 ontological_structure_folder = "/data/ontologies/structure/b.mystructwsf.com/"
 ; structWSF ontological structure

 [triplestore]

 username = "dba"
 ; Username used to connect to the triple store instance

 password = "dba"
 ; Password used to connect to the triple store instance

 dsn = "structwsf-triples-store"
 ; DSN used to connect to the triple store instance

 host = "localhost"
 ; Host used to connect to the triple store instance

 log_table = "SD.WSF.ws_queries_log_structwsf_b"
 ; Name of the logging table on the Virtuoso instance

 port = "8890"
 ; Port number where the triple store server is reachable

 [solr]

 wsf_solr_core = "structwsf_b"
 ; The core to use for Solr; use "" (double, double-quotes) when the "multicore"
 ; mode is not used

 host = "localhost"
 ; Host used to connect to the Solr instance

 solr_auto_commit = "FALSE"
 ; Auto-commit handled by the Solr data management system. If this parameter is true,
 ; then Solr will handle the commit operations by itself. If it is false, then the
 ; web services will trigger the commit operations. Usually, auto-commit should be handled
 ; by Solr when the size of the dataset is too big, otherwise operations such as delete
 ; could take a long time.

 port = "8983"
 ; Port number where the Solr server is reachable

 fields_index_folder = "/tmp/"
 ; This is the folder where the index file of all the fields defined in Solr
 ; is saved. You have to make sure that the web server has write access to this folder.
 ; This folder path has to end with a slash "/".

Note that some of these settings will be discussed below in different sections.

The next step is to change the WebService.php file of each instance so that it points to these new data.ini configuration files:

For the instance A:

 vim /usr/share/structwsf/structwsf_instance_A/framework/WebService.php

On lines 34 and 37, change the two folder settings to:

 /*! @brief data.ini file folder */
 public static $data_ini = "/data/data_ini/structwsf_instance_A/";

 /*! @brief network.ini file folder */
 public static $network_ini = "/usr/share/structwsf/structwsf_instance_A/";

For the instance B:

 vim /usr/share/structwsf/structwsf_instance_B/framework/WebService.php

On lines 34 and 37, change the two folder settings to:

 /*! @brief data.ini file folder */
 public static $data_ini = "/data/data_ini/structwsf_instance_B/";

 /*! @brief network.ini file folder */
 public static $network_ini = "/usr/share/structwsf/structwsf_instance_B/";
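As a quick sanity check, you can confirm that each instance now points at its own configuration folders:

 # Each command should print the instance's own data_ini and network_ini paths
 grep -n "data_ini\|network_ini" /usr/share/structwsf/structwsf_instance_A/framework/WebService.php
 grep -n "data_ini\|network_ini" /usr/share/structwsf/structwsf_instance_B/framework/WebService.php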

Network Configuration Files
The next step is to modify the network related configuration files.

Make sure that you have a network.ini file in each of the structWSF instance folders we created:

 /usr/share/structwsf/structwsf_instance_A/network.ini
 /usr/share/structwsf/structwsf_instance_B/network.ini

Then make sure to properly configure each of these files.

Then we have to create one network.ini file for each instance:

For the structWSF instance A:

 vim /usr/share/structwsf/structwsf_instance_A/network.ini

Then copy and paste this configuration file into the vim window, and modify it to suit your needs:

 ; structWSF Network Configuration File
 ;
 ; All the settings defined in this configuration file are related to the network
 ; supporting the structWSF instance. We split the structWSF configuration
 ; files in two: one for the settings related to the data, and one for
 ; the settings related to the network that runs the structWSF instance (this file).
 ;
 ; This decision has been taken to help system administrators split the concerns
 ; between managing the data of a structWSF instance, and its software. Think
 ; about an Amazon EC2/EBS setting where the databases of the structWSF datastores
 ; are hosted on an EBS volume that can be attached to different running EC2
 ; instances.

 [network]

 wsf_base_url = "http://a.mystructwsf.com"
 ; Base URL used to access the structWSF instance

 wsf_base_path = "/usr/share/structwsf/structwsf_instance_A/"
 ; Local server path of the structWSF instance

 wsf_local_ip = "127.0.0.1"
 ; Local server IP address of the structWSF instance

For the structWSF instance B:

 vim /usr/share/structwsf/structwsf_instance_B/network.ini

Then copy and paste this configuration file into the vim window, and modify it to suit your needs:

 ; structWSF Network Configuration File
 ;
 ; All the settings defined in this configuration file are related to the network
 ; supporting the structWSF instance. We split the structWSF configuration
 ; files in two: one for the settings related to the data, and one for
 ; the settings related to the network that runs the structWSF instance (this file).
 ;
 ; This decision has been taken to help system administrators split the concerns
 ; between managing the data of a structWSF instance, and its software. Think
 ; about an Amazon EC2/EBS setting where the databases of the structWSF datastores
 ; are hosted on an EBS volume that can be attached to different running EC2
 ; instances.

 [network]

 wsf_base_url = "http://b.mystructwsf.com"
 ; Base URL used to access the structWSF instance

 wsf_base_path = "/usr/share/structwsf/structwsf_instance_B/"
 ; Local server path of the structWSF instance

 wsf_local_ip = "127.0.0.1"
 ; Local server IP address of the structWSF instance

Log Settings
Each structWSF instance has its own logging capabilities. You set up the name of the Virtuoso logging table in the "Data Configuration Files" section. Now we have to create the two tables in the Virtuoso instance.

To create the new logging tables, open the Virtuoso Conductor in a Web browser at an address such as:

http://mysite_address.com:8890/conductor

Log in. The default login is:

 Account: dba
 Password: dba

Click on the "Interactive SQL (ISQL)" link at the top of the left sidebar. Then copy, paste, and run these SQL queries in the iSQL window:

 create table "SD"."WSF"."ws_queries_log_structwsf_a"
 (
   "id" INTEGER IDENTITY,
   "requested_Web_service" VARCHAR,
   "requester_ip" VARCHAR,
   "request_parameters" VARCHAR,
   "requested_mime" VARCHAR,
   "request_datetime" DATETIME,
   "request_processing_time" DECIMAL,
   "request_http_response_status" VARCHAR,
   "requester_user_agent" VARCHAR,
   PRIMARY KEY ("id")
 );

 create index sd_wsf_a_requested_Web_service_index on SD.WSF.ws_queries_log_structwsf_a (requested_Web_service);
 create index sd_wsf_a_requester_ip_index on SD.WSF.ws_queries_log_structwsf_a (requester_ip);
 create index sd_wsf_a_requested_mime_index on SD.WSF.ws_queries_log_structwsf_a (requested_mime);
 create index sd_wsf_a_request_datetime_index on SD.WSF.ws_queries_log_structwsf_a (request_datetime);
 create index sd_wsf_a_request_http_response_status_index on SD.WSF.ws_queries_log_structwsf_a (request_http_response_status);
 create index sd_wsf_a_requester_user_agent_index on SD.WSF.ws_queries_log_structwsf_a (requester_user_agent);

 create table "SD"."WSF"."ws_queries_log_structwsf_b"
 (
   "id" INTEGER IDENTITY,
   "requested_Web_service" VARCHAR,
   "requester_ip" VARCHAR,
   "request_parameters" VARCHAR,
   "requested_mime" VARCHAR,
   "request_datetime" DATETIME,
   "request_processing_time" DECIMAL,
   "request_http_response_status" VARCHAR,
   "requester_user_agent" VARCHAR,
   PRIMARY KEY ("id")
 );

 create index sd_wsf_b_requested_Web_service_index on SD.WSF.ws_queries_log_structwsf_b (requested_Web_service);
 create index sd_wsf_b_requester_ip_index on SD.WSF.ws_queries_log_structwsf_b (requester_ip);
 create index sd_wsf_b_requested_mime_index on SD.WSF.ws_queries_log_structwsf_b (requested_mime);
 create index sd_wsf_b_request_datetime_index on SD.WSF.ws_queries_log_structwsf_b (request_datetime);
 create index sd_wsf_b_request_http_response_status_index on SD.WSF.ws_queries_log_structwsf_b (request_http_response_status);
 create index sd_wsf_b_requester_user_agent_index on SD.WSF.ws_queries_log_structwsf_b (requester_user_agent);

Interactive SQL will report, once per statement:

 The statement execution did not return a result set.
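Optionally, you can verify from the command line that both tables were created. A sketch using the isql client shipped with Virtuoso (the 1111 SQL port and the dba/dba credentials are the defaults; adjust to your installation):

 # Each query should return 0 on a freshly created table
 isql localhost:1111 dba dba exec="select count(*) from SD.WSF.ws_queries_log_structwsf_a;"
 isql localhost:1111 dba dba exec="select count(*) from SD.WSF.ws_queries_log_structwsf_b;"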

Ontologies Folder
Previously, you set up the names of the two ontologies-related folders in the "Data Configuration Files" section. Now we have to create the folder structure on the server instance.

Two main folders are related to the ontologies on a server instance:


 * files: which is the folder where all the OWL & RDFS files are saved for processing with the load_ontologies.php script.
 * structure: which is the folder where the internal ontological structure used by both structWSF and OSF-Drupal is saved.

Now we want to change that basic structure to put each of these folders within an instance-related parent folder:

 mkdir -p /data/ontologies/files/a.mystructwsf.com/
 mkdir -p /data/ontologies/structure/a.mystructwsf.com/

 mkdir -p /data/ontologies/files/b.mystructwsf.com/
 mkdir -p /data/ontologies/structure/b.mystructwsf.com/

The next step is to put the OWL or RDFS files of the ontologies you want to load, for each instance, in the related /files/ folders.
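For example, assuming you have an ontology file named myontology.owl (a hypothetical name) that both instances should load:

 # Hypothetical ontology file; use your own OWL/RDFS files
 cp myontology.owl /data/ontologies/files/a.mystructwsf.com/
 cp myontology.owl /data/ontologies/files/b.mystructwsf.com/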

The final step is to re-create the structure files by running the load_ontologies.php script of each instance. You can do it by running these commands:

 cd /usr/share/structwsf/structwsf_instance_A/
 php load_ontologies.php

 cd /usr/share/structwsf/structwsf_instance_B/
 php load_ontologies.php
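Once the scripts have run, you can verify that the structure files were generated for each instance:

 # Each folder should now contain the generated ontological structure files
 ls -l /data/ontologies/structure/a.mystructwsf.com/
 ls -l /data/ontologies/structure/b.mystructwsf.com/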

OSF-Drupal
Since conStruct is a Drupal module, the system administrator only has to follow the Drupal documentation to set up a Drupal multi-site instance, which is a native feature of Drupal. All the information about Drupal multi-site setups is available from the Multi-site how-tos page.

You saw above how to configure Apache 2 to properly handle the domain name resolution. The multiple OSF-Drupal instances themselves should be created the following way:

The first step is to put all the modules that OSF-Drupal depends on in this folder:

 /usr/share/drupal/sites/all/modules/

The /sites/ folder is where all the multi-sites hosted by the Drupal instance live. When Drupal initializes, it performs a cascade of checks to see if multi-sites are available, and resolves everything down to the simplest setup: a single default Drupal instance.

The /sites/all/modules/ folder is where all the modules shared among all sites are hosted.

Then, we have to create one folder per Drupal/conStruct instance. The name of each folder has to be the same as the domain name (including sub-domains, if any) of the site. For this tutorial, we have to create these folders:

 mkdir -p /usr/share/drupal/sites/a.mystructwsf.com/
 mkdir -p /usr/share/drupal/sites/b.mystructwsf.com/

Then, you have to set up your Drupal site in each folder, and put the OSF-Drupal module in each of these folders:

 /usr/share/drupal/sites/a.mystructwsf.com/modules/
 /usr/share/drupal/sites/b.mystructwsf.com/modules/
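As per the standard Drupal multi-site procedure, each site folder also needs its own settings.php file pointing at that site's database. A minimal sketch, assuming the default settings template shipped with Drupal:

 cp /usr/share/drupal/sites/default/default.settings.php /usr/share/drupal/sites/a.mystructwsf.com/settings.php
 cp /usr/share/drupal/sites/default/default.settings.php /usr/share/drupal/sites/b.mystructwsf.com/settings.php

Then edit the database credentials in each copy, as described in the Multi-site how-tos page.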

Finally, make sure to copy, or to create symbolic links to, the ontological structure files you created above, into the ontologies folder of both sites:

 /usr/share/drupal/sites/a.mystructwsf.com/modules/conStruct/framework/ontologies/
 /usr/share/drupal/sites/b.mystructwsf.com/modules/conStruct/framework/ontologies/
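A sketch using symbolic links, assuming the conStruct module paths shown above:

 # Link each instance's structure files into its site's ontologies folder
 ln -s /data/ontologies/structure/a.mystructwsf.com/* /usr/share/drupal/sites/a.mystructwsf.com/modules/conStruct/framework/ontologies/
 ln -s /data/ontologies/structure/b.mystructwsf.com/* /usr/share/drupal/sites/b.mystructwsf.com/modules/conStruct/framework/ontologies/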

Solr
There is a feature in Solr that enables running multiple Solr indexes on the same running instance. It is like having multiple databases on the same running DBMS. This feature is called multicore.

The first thing to do is to make sure that the folder we will use to install the multiple cores is created:

 mkdir -p /usr/share/solr/example/multicore/

In that folder, we will create one folder per core we want to run on the multicore instance. Let's create the two that we need for this tutorial:

 mkdir -p /usr/share/solr/example/multicore/structwsf_instance_A/conf/
 mkdir -p /usr/share/solr/example/multicore/structwsf_instance_B/conf/

The next step is to duplicate the conf folder of your normal instance into both folders above (we will use it as the basis of our multicore instance):

 cp -a /usr/share/solr/example/solr/conf/* /usr/share/solr/example/multicore/structwsf_instance_A/conf/
 cp -a /usr/share/solr/example/solr/conf/* /usr/share/solr/example/multicore/structwsf_instance_B/conf/

Then we have to create the configuration file that tells Solr where the cores are located on the server instance. This configuration file is in the multicore folder:

 vim /usr/share/solr/example/multicore/solr.xml

Then copy and paste this configuration file into the vim window, and modify it to suit your needs:

 <?xml version="1.0" encoding="UTF-8" ?>
 <solr persistent="false">
   <cores adminPath="/admin/cores">
     <core name="structwsf_instance_A" instanceDir="structwsf_instance_A">
       <property name="dataDir" value="/data/solr/structwsf_instance_A" />
     </core>
     <core name="structwsf_instance_B" instanceDir="structwsf_instance_B">
       <property name="dataDir" value="/data/solr/structwsf_instance_B" />
     </core>
   </cores>
 </solr>
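The dataDir folders referenced in this file must exist and be writable by the user running Solr. We create them under /data/solr/ so that the indexes of both instances live with the rest of the data managed in this guide:

 mkdir -p /data/solr/structwsf_instance_A/
 mkdir -p /data/solr/structwsf_instance_B/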

Then, within each core folder, we have to edit the solrconfig.xml file on line 72 to change the default data directory.

 vim /usr/share/solr/example/multicore/structwsf_instance_A/conf/solrconfig.xml

Then, change this line:

 <dataDir>${solr.data.dir:./solr/data}</dataDir>

to this one:

 <dataDir>${solr.data.dir:/data/solr/structwsf_instance_A}</dataDir>

Do the same for the instance B:

 vim /usr/share/solr/example/multicore/structwsf_instance_B/conf/solrconfig.xml

 <dataDir>${solr.data.dir:/data/solr/structwsf_instance_B}</dataDir>

The final step is to re-create the Solr run script so that it starts Solr with the multiple cores rather than the single instance.

 rm -rf /etc/init.d/solr
 vim /etc/init.d/solr

Then copy and paste this code into Vim:

 #!/bin/bash
 # description: Starts and stops the Solr production instance

 set -e

 PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

 SOLR_DIR=/usr/share/solr/example/

 # Multicore support
 JAVA_OPTIONS="-Dsolr.solr.home=./multicore -server -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -Xmx512M -Xms64M -jar start.jar"

 JAVA="/usr/bin/java"
 LOG_FILE="/var/log/solr/console.log"

 case $1 in
     start)
         echo "Starting Solr"
         cd $SOLR_DIR
         # Alternative, to run Solr under a dedicated "solr" user account:
         # su - solr -c "cd $SOLR_DIR && nohup $JAVA $JAVA_OPTIONS 2> $LOG_FILE &"
         EXEC="$JAVA $JAVA_OPTIONS"
         nohup $EXEC 2> $LOG_FILE &
         echo "ok - remember it may take a minute or two before Solr responds to requests"
         ;;
     stop)
         echo "Stopping Solr"
         cd $SOLR_DIR
         $JAVA $JAVA_OPTIONS --stop
         echo "ok"
         ;;
     restart)
         $0 stop
         sleep 3
         $0 start
         ;;
     *)
         echo "Usage: $0 {start|stop|restart}" >&2
         exit 1
         ;;
 esac

Then change the permissions of this run script so that it can be executed:

 chmod 755 /etc/init.d/solr
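Optionally, on a Debian/Ubuntu server you can also register the script so that Solr starts at boot:

 update-rc.d solr defaults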

Restarting
The final step is to restart all the software so that each takes its new settings into consideration. This can simply be done by using the run scripts of each piece of software:

 /etc/init.d/apache2 restart
 /etc/init.d/virtuoso restart
 /etc/init.d/solr restart
 /etc/init.d/mysql restart
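Once everything is restarted, a few quick checks confirm that both instances are up. A sketch, assuming the host names used throughout this tutorial:

 # Both sets of structWSF endpoints should answer over HTTP
 curl -I http://a.mystructwsf.com/ws/
 curl -I http://b.mystructwsf.com/ws/

 # Both Solr cores should be listed with a STATUS response
 curl "http://localhost:8983/solr/admin/cores?action=STATUS"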