High Availability Guide
Introduction #
This article describes how to use Liferay with virtual hosting, load balancing and clustering to ensure high availability for critical-uptime systems.
The Big Picture #
Virtual Hosting #
Example: we will install three virtual hosts: localhost serving www.mytcl.com, localhost2 serving www.mytte.com, and localhost3 serving www.mypmu.com.
server.xml #
Add the following host entry for localhost2 to $TOMCAT_HOME/conf/server.xml:
<Host name="localhost2" debug="0" appBase="webapp2" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <Alias>www.mytte.com</Alias> <Alias>mytte.com</Alias> </Host>
Repeat the step above to add a host entry for localhost3 to $TOMCAT_HOME/conf/server.xml:
<Host name="localhost3" debug="0" appBase="webapp3" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <Alias>www.mypmu.com</Alias> <Alias>mypmu.com</Alias> </Host>
ROOT.xml #
Add a configuration file for localhost2 (www.mytte.com); each per-host ROOT.xml defines the default web application context for that virtual host.
Step 1: mkdir $TOMCAT_HOME/conf/Catalina/localhost2
Step 2: copy $TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml to $TOMCAT_HOME/conf/Catalina/localhost2/ROOT.xml
Step 3: Modify the Context element of $TOMCAT_HOME/conf/Catalina/localhost2/ROOT.xml as below:
<Context path="" debug="0" reloadable="true" cookies="true" crossContext="false" privileged="false">
Repeat the above steps to add a configuration file for localhost3 (www.mypmu.com):
Step 1: mkdir $TOMCAT_HOME/conf/Catalina/localhost3
Step 2: copy $TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml to $TOMCAT_HOME/conf/Catalina/localhost3/ROOT.xml
Step 3: Modify the Context element of $TOMCAT_HOME/conf/Catalina/localhost3/ROOT.xml as below:
<Context path="" debug="0" reloadable="true" cookies="true" crossContext="false" privileged="false">
A new host #
Add a new host for localhost2 (www.mytte.com):
Step 1: Create a new folder $TOMCAT_HOME/webapp2/ROOT
Step 2: Copy all the content of $TOMCAT_HOME/webapps/ROOT to $TOMCAT_HOME/webapp2/ROOT
Step 3: Edit $TOMCAT_HOME/webapp2/ROOT/WEB-INF/web.xml and change the company_id to mytte.com
Step 4: Create a new folder $TOMCAT_HOME/webapp2/tunnel
Step 5: Copy all the content of $TOMCAT_HOME/webapps/tunnel to $TOMCAT_HOME/webapp2/tunnel
Repeat the above steps to add a new host for localhost3 (www.mypmu.com):
Step 1: Create a new folder $TOMCAT_HOME/webapp3/ROOT
Step 2: Copy all the content of $TOMCAT_HOME/webapps/ROOT to $TOMCAT_HOME/webapp3/ROOT
Step 3: Edit $TOMCAT_HOME/webapp3/ROOT/WEB-INF/web.xml and change the company_id to mypmu.com
Step 4: Create a new folder $TOMCAT_HOME/webapp3/tunnel
Step 5: Copy all the content of $TOMCAT_HOME/webapps/tunnel to $TOMCAT_HOME/webapp3/tunnel
Hosts configuration in the OS environment #
Step 1: hosts
In Windows, modify C:\Windows\System32\drivers\etc\hosts as below:
127.0.0.1 localhost www.mytcl.com
127.0.0.1 localhost2 www.mytte.com
127.0.0.1 localhost3 www.mypmu.com
In Linux, modify /etc/hosts as below:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost.localdomain localhost2
127.0.0.1 localhost.localdomain localhost3
Step 2: In Linux, verify that /etc/host.conf consults the hosts file before DNS:
order hosts,bind
Step 3: In Linux, verify that /etc/nsswitch.conf lists files first for host lookups:
hosts: files nisplus nis dns
Shared resources #
Move the shared libraries to $TOMCAT_HOME/common/lib/ext so all virtual hosts load a single copy.
Step 1: Move all files of $TOMCAT_HOME/webapps/ROOT/WEB-INF/lib except util-taglib.jar to $TOMCAT_HOME/common/lib/ext
Step 2: Then delete the duplicated files of $TOMCAT_HOME/webapp2/ROOT/WEB-INF/lib, except util-taglib.jar
Repeat Step 2 to delete the duplicated files of $TOMCAT_HOME/webapp3/ROOT/WEB-INF/lib, except util-taglib.jar
Verify virtual hosting #
Step 1: In Linux, change directory to $TOMCAT_HOME/bin and run startup.sh
Step 2: Browse the results on the same machine: http://localhost:8080, http://localhost2:8080 and http://localhost3:8080
Step 3: If the domains are forwarded to the IP address of the machine, users may browse http://www.mytcl.com:8080, http://www.mytte.com:8080 and http://www.mypmu.com:8080
Tomcat Clustering #
server.xml #
Uncomment the Cluster element in $TOMCAT_HOME/conf/server.xml:
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true"
         notifyListenersOnReplication="true">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.4" mcastPort="45564"
              mcastFrequency="500" mcastDropTime="3000"/>
  <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto" tcpListenPort="4001"
            tcpSelectorTimeout="100" tcpThreadCount="6"/>
  <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
          replicationMode="pooled" ackTimeout="15000" waitForAck="true"/>
  <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
  <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/" watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
</Cluster>
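In this configuration the Membership element discovers the other nodes through multicast heartbeats, the Receiver and Sender replicate session changes over TCP, the ReplicationValve's filter keeps requests for static resources (images, scripts, stylesheets) from triggering replication, and the DeltaManager sends only session deltas rather than whole sessions.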
<distributable/> #
Make sure your web.xml has the <distributable/> element, or set it on your <Context distributable="true" />.
Step 1: For localhost of www.mytcl.com:
Add <distributable/> inside <web-app> in $TOMCAT_HOME/webapps/ROOT/WEB-INF/web.xml:
<web-app>
  <distributable/>
  <context-param>
    <param-name>company_id</param-name>
    <param-value>liferay.com</param-value>
  </context-param>
  ...
</web-app>
Step 2: Repeat Step 1 for localhost2 of www.mytte.com: add <distributable/> into $TOMCAT_HOME/webapp2/ROOT/WEB-INF/web.xml
Step 3: Repeat Step 1 for localhost3 of www.mypmu.com: add <distributable/> into $TOMCAT_HOME/webapp3/ROOT/WEB-INF/web.xml
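Note that the DeltaManager replicates a session by serializing its attributes, so any object an application stores in a replicated session must implement java.io.Serializable. A minimal illustrative sketch (the servlet and counter class are hypothetical, not part of Liferay):

import java.io.IOException;
import java.io.Serializable;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet {

    // Any object stored in a replicated session must be serializable,
    // otherwise the DeltaManager cannot ship it to the other nodes.
    static class VisitCounter implements Serializable {
        private static final long serialVersionUID = 1L;
        private int visits;
        void increment() { visits++; }
        int getVisits() { return visits; }
    }

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        VisitCounter counter =
            (VisitCounter) req.getSession().getAttribute("visitCounter");
        if (counter == null) {
            counter = new VisitCounter();
        }
        counter.increment();
        // setAttribute marks the session dirty, so the delta is replicated.
        req.getSession().setAttribute("visitCounter", counter);
        resp.getWriter().println("Visits in this session: " + counter.getVisits());
    }
}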
Clone tomcat #
Copy the entire $TOMCAT_HOME directory to a second instance, tomcat2.
In Linux:
cp -dpr tomcat tomcat2
Port numbers #
Change the port numbers in tomcat2/conf/server.xml so they do not conflict with the first Tomcat instance. In this example the shutdown port becomes 8006, the HTTP connector 8081, the AJP connector 8010, the Engine gets jvmRoute="jvm02", and the cluster tcpListenPort becomes 4004:
<Server port="8006" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.core.AprLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>

<!-- Global JNDI resources -->
<GlobalNamingResources>
  <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
  <Resource name="UserDatabase" auth="Container"
            type="org.apache.catalina.UserDatabase"
            description="User database that can be updated and saved"
            factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
            pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>

<Service name="Catalina">
  <Connector port="8081" maxHttpHeaderSize="8192"
             maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
             enableLookups="false" redirectPort="8444" acceptCount="100"
             connectionTimeout="20000" disableUploadTimeout="true"
             URIEncoding="UTF-8" />
  <Connector port="8010" enableLookups="false" redirectPort="8444"
             protocol="AJP/1.3" URIEncoding="UTF-8" />
  <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm02">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           resourceName="UserDatabase"/>
    <Host name="localhost" appBase="webapps" unpackWARs="true"
          autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
      <Alias>www.mytcl.com</Alias>
      <Alias>mytcl.com</Alias>
      <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
               managerClassName="org.apache.catalina.cluster.session.DeltaManager"
               expireSessionsOnShutdown="false"
               useDirtyFlag="true"
               notifyListenersOnReplication="true">
        <Membership className="org.apache.catalina.cluster.mcast.McastService"
                    mcastAddr="228.0.0.4" mcastPort="45564"
                    mcastFrequency="500" mcastDropTime="3000"/>
        <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
                  tcpListenAddress="auto" tcpListenPort="4004"
                  tcpSelectorTimeout="100" tcpThreadCount="6"/>
        <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
                replicationMode="pooled" ackTimeout="15000" waitForAck="true"/>
        <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
               filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
        <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
                  tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/"
                  watchDir="/tmp/war-listen/" watchEnabled="false"/>
        <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
      </Cluster>
    </Host>
  </Engine>
</Service>
</Server>
Clustering in multiple hosting #
Add hosts for localhost2 (www.mytte.com) and localhost3 (www.mypmu.com) to tomcat2/conf/server.xml. Remember to give each host a distinct tcpListenPort:
<Host name="localhost3" debug="0" appBase="webapp3" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <Alias>www.mypmu.com</Alias> <Alias>mypmu.com</Alias> <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster" managerClassName="org.apache.catalina.cluster.session.DeltaManager" expireSessionsOnShutdown="false" useDirtyFlag="true" notifyListenersOnReplication="true"> <Membership className="org.apache.catalina.cluster.mcast.McastService" mcastAddr="228.0.0.4" mcastPort="45564" mcastFrequency="500" mcastDropTime="3000"/> <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener" tcpListenAddress="auto" tcpListenPort="4006" tcpSelectorTimeout="100" tcpThreadCount="6"/> <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter" replicationMode="pooled" ackTimeout="15000" waitForAck="true"/> <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/> <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer" tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/" watchDir="/tmp/war-listen/" watchEnabled="false"/> <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/> </Cluster> </Host> <Host name="localhost2" debug="0" appBase="webapp2" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <Alias>www.mytte.com</Alias> <Alias>mytte.com</Alias> <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster" managerClassName="org.apache.catalina.cluster.session.DeltaManager" expireSessionsOnShutdown="false" useDirtyFlag="true" notifyListenersOnReplication="true"> <Membership className="org.apache.catalina.cluster.mcast.McastService" mcastAddr="228.0.0.4" mcastPort="45564" mcastFrequency="500" mcastDropTime="3000"/> <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener" tcpListenAddress="auto" tcpListenPort="4005" tcpSelectorTimeout="100" tcpThreadCount="6"/> <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter" replicationMode="pooled" ackTimeout="15000" waitForAck="true"/> <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/> <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer" tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/" watchDir="/tmp/war-listen/" watchEnabled="false"/> <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/> </Cluster> </Host>
Route in Linux #
If the cluster doesn't work under Linux with two nodes on two machines, check the following:
1) Is your network interface enabled for multicast? Look for the MULTICAST flag in the output of ifconfig eth0.
2) Does a multicast route exist for your network interface? An example setting: route add -host 228.0.0.4 gw 192.168.10.10 dev eth0
3) Is your firewall active? If so, check that the multicast port is on your UDP open list and that the receiver TCP port is open on both machines.
Verify the cluster #
Finally, you can start both Tomcat instances. You should now be able to see from their logs that they are communicating as a cluster (in this example at http://192.168.10.11:8080 and http://192.168.10.12:8081).
You may check session replication using the test.jsp below:
<%@ page contentType="text/html; charset=UTF-8" import="java.util.*"%>
<html>
<head><title>Cluster App Test</title></head>
<body>
Server Info:
<% out.print(request.getLocalAddr() + " : " + request.getLocalPort()); %>
<%
out.println(" ID " + session.getId());
String dataName = request.getParameter("dataName");
if (dataName != null && dataName.length() > 0) {
    String dataValue = request.getParameter("dataValue");
    session.setAttribute(dataName, dataValue);
}
Enumeration e = session.getAttributeNames();
while (e.hasMoreElements()) {
    String name = (String)e.nextElement();
    String value = session.getAttribute(name).toString();
    out.println(name + " = " + value);
}
%>
<form action="test.jsp" method="POST">
Name: <input type=text size=20 name="dataName">
Value: <input type=text size=20 name="dataValue">
<input type=submit>
</form>
</body>
</html>
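To verify replication, deploy test.jsp on both nodes, submit a name/value pair on the first node, then request the page on the second node: the stored attributes (and the session ID, apart from the jvmRoute suffix) should appear there as well, confirming the session was replicated.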
Reference #
http://tomcat.apache.org/tomcat-5.5-doc/cluster-howto.html
http://tomcat.apache.org/faq/cluster.html
http://www.theserverside.com/tt/articles/article.tss?l=ClusteringTomcat
Load Balancing #
Tomcat does not itself redirect incoming requests to the next available server when a cluster node goes down, so a load balancer is needed to handle both workload distribution and failover. There are two popular load balancers in the open source market:
Solution 1: Apache + mod_jk (Reference: http://howtoforge.com/high_availability_loadbalanced_apache_cluster)
Solution 2: Pen is a load balancer for "simple" TCP-based protocols such as HTTP or SMTP. It allows several servers to appear as one to the outside, automatically detects servers that are down, and distributes clients among the available servers. This gives high availability and scalable performance. (Reference: http://siag.nu/pen/)
The main difference is that Pen is much simpler to install and operate than Apache+mod_jk.
Download #
Download http://siag.nu/pub/pen/pen-0.17.1.tar.gz and unpack it:
tar zxvf pen-0.17.1.tar.gz
Installation #
For Linux, type:
./configure
make
make install
For Windows, download the prebuilt binary from http://siag.nu/pub/pen/pen-0.17.1.exe
Running #
The command to run the load balancer is as follows:
pen -f -a -d -w localhost:80 192.168.10.11:8080 192.168.10.12:8081
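This makes Pen listen on localhost:80 and distribute incoming connections between the two Tomcat nodes at 192.168.10.11:8080 and 192.168.10.12:8081, matching the cluster set up above.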
The command-line parameters used above are:
-w: Use the weight algorithm for load balancing
-a: Print the data sent back and forth in ASCII format
-f: Stay in the foreground
-d: Enable debug mode
Reference #
http://siag.nu/pen/
http://siag.nu/pen/vrrpd-linux.shtml
Database Clustering #
Sequoia is an open source database clustering middleware that allows any Java application to transparently access a cluster of databases through JDBC.
The following example shows Liferay and Sequoia working together, using HSQLDB as the sample database.
Installation #
Step 1: Download sequoia-2.10.2-bin.zip from http://sequoia.continuent.org
Step 2: Unzip sequoia-2.10.2-bin.zip to your favourite directory (example: /home/brian/sequoia)
Step 3: Set up environment:
In Linux:
export SEQUOIA_HOME=/home/brian/sequoia
export CLASSPATH_XTRAS=/home/brian/sequoia/3rdparty/hsqldb/lib/hsqldb.jar
In Windows:
set SEQUOIA_HOME=C:\home\brian\sequoia
set CLASSPATH_XTRAS=C:\home\brian\sequoia\3rdparty\hsqldb\lib\hsqldb.jar
HSQLDB schema #
Step 1: Bug fixes in Sequoia for HSQLDB
Modify file: $SEQUOIA_HOME/lib/octopus/xml/conf/HypersonicSQLConf.xml
Bugfix 1: Add a BOOLEAN entry inside the SQLType tag, as below:
<SQLType>
  <BOOLEAN javaType="java.lang.Boolean">BOOLEAN</BOOLEAN>
</SQLType>
Bugfix 2: Modify the BOOLEAN entry inside the JDBCType tag, as below:
<JDBCType>
  <BOOLEAN>BOOLEAN</BOOLEAN>
</JDBCType>
Step 2: OctopusDBVendors.xml
Modify file $SEQUOIA_HOME/lib/octopus/xml/conf/OctopusDBVendors.xml
Keep the databases below; the others may be commented out.
<Vendor name="Csv">CsvConf.xml</Vendor>
<Vendor name="HypersonicSQL">HypersonicSQLConf.xml</Vendor>
Step 3: Virtual databases for Liferay
mkdir $HOME/hsqldb1
cp $TOMCAT_HOME/bin/test.* $HOME/hsqldb1
cd $HOME/hsqldb1
java -cp hsqldb.jar org.hsqldb.Server -port 9001
mkdir $HOME/hsqldb2
cp $TOMCAT_HOME/bin/test.* $HOME/hsqldb2
cd $HOME/hsqldb2
java -cp hsqldb.jar org.hsqldb.Server -port 9002
Step 4: Recovery log database of Sequoia
mkdir $HOME/hsqldb3
cp $SEQUOIA_HOME/3rdparty/hsqldb/data/test.* $HOME/hsqldb3
cd $HOME/hsqldb3
java -cp hsqldb.jar org.hsqldb.Server -port 9003
Step 5: Optionally, repeat the steps above to create the databases for the second controller on ports 9004 to 9006.
Step 6: Sequoia can distribute all the databases across different machines. This example uses the following layout:
Controller 1 is assigned to 192.168.10.19
Database 1 is assigned to 192.168.10.13
Database 2 is assigned to 192.168.10.14
Recovery log 1 is assigned to 192.168.10.15
Controller 2 is assigned to 192.168.10.20
Database 3 is assigned to 192.168.10.16
Database 4 is assigned to 192.168.10.17
Recovery log 2 is assigned to 192.168.10.18
Configure controllers #
Step 1: Create the controller file $SEQUOIA_HOME/config/controller/liferayController1.xml as below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE SEQUOIA-CONTROLLER PUBLIC "-//Continuent//DTD SEQUOIA-CONTROLLER 2.10.2//EN"
  "http://sequoia.continuent.org/dtds/sequoia-controller-2.10.2.dtd">
<SEQUOIA-CONTROLLER>
  <Controller ipAddress="192.168.10.19" port="25322">
    <JmxSettings>
      <RmiJmxAdaptor port="1090"/>
    </JmxSettings>
    <VirtualDatabase configFile="liferayVDB1.xml"
                     virtualDatabaseName="lportal"
                     autoEnableBackends="true"/>
  </Controller>
</SEQUOIA-CONTROLLER>
Step 2: Repeat Step 1 to create the second controller file $SEQUOIA_HOME/config/controller/liferayController2.xml as below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE SEQUOIA-CONTROLLER PUBLIC "-//Continuent//DTD SEQUOIA-CONTROLLER 2.10.2//EN"
  "http://sequoia.continuent.org/dtds/sequoia-controller-2.10.2.dtd">
<SEQUOIA-CONTROLLER>
  <Controller ipAddress="192.168.10.20" port="25323">
    <JmxSettings>
      <RmiJmxAdaptor port="1091"/>
    </JmxSettings>
    <VirtualDatabase configFile="liferayVDB2.xml"
                     virtualDatabaseName="lportal"
                     autoEnableBackends="true"/>
  </Controller>
</SEQUOIA-CONTROLLER>
Configure virtual databases #
Step 1: Create the virtual database file $SEQUOIA_HOME/config/virtualdatabase/liferayVDB1.xml as below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE SEQUOIA PUBLIC "-//Continuent//DTD SEQUOIA 2.10.2//EN"
  "http://sequoia.continuent.org/dtds/sequoia-2.10.2.dtd">
<SEQUOIA>
  <VirtualDatabase name="lportal">
    <Distribution>
      <MessageTimeouts/>
    </Distribution>
    <Backup>
      <Backuper backuperName="Octopus"
                className="org.continuent.sequoia.controller.backup.backupers.OctopusBackuper"
                options="zip=true"/>
    </Backup>
    <AuthenticationManager>
      <Admin>
        <User username="admin" password="adminPassword"/>
      </Admin>
      <VirtualUsers>
        <VirtualLogin vLogin="user" vPassword="userPassword"/>
      </VirtualUsers>
    </AuthenticationManager>
    <DatabaseBackend name="database1" driver="org.hsqldb.jdbcDriver"
                     url="jdbc:hsqldb:hsql://192.168.10.13:9001"
                     connectionTestStatement="call now()">
      <ConnectionManager vLogin="user" rLogin="sa" rPassword="">
        <VariablePoolConnectionManager initPoolSize="10" minPoolSize="5"
                                       maxPoolSize="50" idleTimeout="30"
                                       waitTimeout="10"/>
      </ConnectionManager>
    </DatabaseBackend>
    <DatabaseBackend name="database2" driver="org.hsqldb.jdbcDriver"
                     url="jdbc:hsqldb:hsql://192.168.10.14:9002"
                     connectionTestStatement="call now()">
      <ConnectionManager vLogin="user" rLogin="sa" rPassword="">
        <VariablePoolConnectionManager initPoolSize="10" minPoolSize="5"
                                       maxPoolSize="50" idleTimeout="30"
                                       waitTimeout="10"/>
      </ConnectionManager>
    </DatabaseBackend>
    <RequestManager>
      <RequestScheduler>
        <RAIDb-1Scheduler level="passThrough"/>
      </RequestScheduler>
      <LoadBalancer>
        <RAIDb-1>
          <WaitForCompletion policy="first"/>
          <RAIDb-1-LeastPendingRequestsFirst/>
        </RAIDb-1>
      </LoadBalancer>
      <RecoveryLog driver="org.hsqldb.jdbcDriver"
                   url="jdbc:hsqldb:hsql://192.168.10.15:9003"
                   login="sa" password="">
        <RecoveryLogTable tableName="RECOVERY"
                          logIdColumnType="BIGINT NOT NULL"
                          vloginColumnType="VARCHAR NOT NULL"
                          sqlColumnType="VARCHAR NOT NULL"
                          extraStatementDefinition=",PRIMARY KEY (log_id)"/>
        <CheckpointTable tableName="CHECKPOINT"
                         checkpointNameColumnType="VARCHAR NOT NULL"/>
        <BackendTable tableName="BACKEND"
                      databaseNameColumnType="VARCHAR NOT NULL"
                      backendNameColumnType="VARCHAR NOT NULL"
                      checkpointNameColumnType="VARCHAR NOT NULL"/>
        <DumpTable tableName="DUMP"
                   dumpNameColumnType="VARCHAR NOT NULL"
                   dumpDateColumnType="TIMESTAMP"
                   dumpPathColumnType="VARCHAR NOT NULL"
                   dumpFormatColumnType="VARCHAR NOT NULL"
                   checkpointNameColumnType="VARCHAR NOT NULL"
                   backendNameColumnType="VARCHAR NOT NULL"
                   tablesColumnType="VARCHAR NOT NULL"/>
      </RecoveryLog>
    </RequestManager>
  </VirtualDatabase>
</SEQUOIA>
Step 2: Repeat Step 1 to create the second virtual database file $SEQUOIA_HOME/config/virtualdatabase/liferayVDB2.xml as below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE SEQUOIA PUBLIC "-//Continuent//DTD SEQUOIA 2.10.2//EN"
  "http://sequoia.continuent.org/dtds/sequoia-2.10.2.dtd">
<SEQUOIA>
  <VirtualDatabase name="lportal">
    <Distribution>
      <MessageTimeouts/>
    </Distribution>
    <Backup>
      <Backuper backuperName="Octopus"
                className="org.continuent.sequoia.controller.backup.backupers.OctopusBackuper"
                options="zip=true"/>
    </Backup>
    <AuthenticationManager>
      <Admin>
        <User username="admin" password="adminPassword"/>
      </Admin>
      <VirtualUsers>
        <VirtualLogin vLogin="user" vPassword="userPassword"/>
      </VirtualUsers>
    </AuthenticationManager>
    <DatabaseBackend name="database3" driver="org.hsqldb.jdbcDriver"
                     url="jdbc:hsqldb:hsql://192.168.10.13:9004"
                     connectionTestStatement="call now()">
      <ConnectionManager vLogin="user" rLogin="sa" rPassword="">
        <VariablePoolConnectionManager initPoolSize="10" minPoolSize="5"
                                       maxPoolSize="50" idleTimeout="30"
                                       waitTimeout="10"/>
      </ConnectionManager>
    </DatabaseBackend>
    <DatabaseBackend name="database4" driver="org.hsqldb.jdbcDriver"
                     url="jdbc:hsqldb:hsql://192.168.10.14:9005"
                     connectionTestStatement="call now()">
      <ConnectionManager vLogin="user" rLogin="sa" rPassword="">
        <VariablePoolConnectionManager initPoolSize="10" minPoolSize="5"
                                       maxPoolSize="50" idleTimeout="30"
                                       waitTimeout="10"/>
      </ConnectionManager>
    </DatabaseBackend>
    <RequestManager>
      <RequestScheduler>
        <RAIDb-1Scheduler level="passThrough"/>
      </RequestScheduler>
      <LoadBalancer>
        <RAIDb-1>
          <WaitForCompletion policy="first"/>
          <RAIDb-1-LeastPendingRequestsFirst/>
        </RAIDb-1>
      </LoadBalancer>
      <RecoveryLog driver="org.hsqldb.jdbcDriver"
                   url="jdbc:hsqldb:hsql://192.168.10.15:9006"
                   login="sa" password="">
        <RecoveryLogTable tableName="RECOVERY"
                          logIdColumnType="BIGINT NOT NULL"
                          vloginColumnType="VARCHAR NOT NULL"
                          sqlColumnType="VARCHAR NOT NULL"
                          extraStatementDefinition=",PRIMARY KEY (log_id)"/>
        <CheckpointTable tableName="CHECKPOINT"
                         checkpointNameColumnType="VARCHAR NOT NULL"/>
        <BackendTable tableName="BACKEND"
                      databaseNameColumnType="VARCHAR NOT NULL"
                      backendNameColumnType="VARCHAR NOT NULL"
                      checkpointNameColumnType="VARCHAR NOT NULL"/>
        <DumpTable tableName="DUMP"
                   dumpNameColumnType="VARCHAR NOT NULL"
                   dumpDateColumnType="TIMESTAMP"
                   dumpPathColumnType="VARCHAR NOT NULL"
                   dumpFormatColumnType="VARCHAR NOT NULL"
                   checkpointNameColumnType="VARCHAR NOT NULL"
                   backendNameColumnType="VARCHAR NOT NULL"
                   tablesColumnType="VARCHAR NOT NULL"/>
      </RecoveryLog>
    </RequestManager>
  </VirtualDatabase>
</SEQUOIA>
Start Controller #
Step 1: $SEQUOIA_HOME/bin/controller.sh -f ../config/controller/liferayController1.xml
Step 2: $SEQUOIA_HOME/bin/controller.sh -f ../config/controller/liferayController2.xml
Step 3: This step is required only the first time, to initialize the virtual databases:
In Linux: $SEQUOIA_HOME/bin/console.sh
In Windows: %SEQUOIA_HOME%/bin/console.bat
type:
admin lportal
admin
adminPassword
expert on
initialize database1
backup database1 backup1 Octopus . sa
enable database1
restore backend database2 backup1 sa
enable database2
restore backend database3 backup1 sa
enable database3
restore backend database4 backup1 sa
enable database4
show backend *
quit
quit
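Roughly, this sequence logs into the lportal virtual database as the admin user, sets the first checkpoint with initialize database1, takes an Octopus dump of that backend with backup, and then uses restore backend to load the same dump into the other backends so that all of them start from identical content before being enabled.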
Liferay configuration #
Step 1: JDBC Driver
Copy $SEQUOIA_HOME/drivers/sequoia-driver.jar to $TOMCAT_HOME/common/lib/ext
Step 2: ROOT.xml
Connect to the Sequoia virtual databases; the Sequoia JDBC driver can scale from one to many controllers in a single connection URL:
Virtual database 1: 192.168.10.19
Virtual database 2: 192.168.10.20
Modify file $TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml
<Resource name="jdbc/LiferayPool" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.continuent.sequoia.driver.Driver"
          url="jdbc:sequoia://192.168.10.19:25322,192.168.10.20:25323/lportal"
          username="user" password="userPassword"
          maxActive="20" />
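For reference, any Java code can reach the same virtual database through plain JDBC once sequoia-driver.jar is on its classpath. A minimal sketch (the class name is illustrative; the query reuses the connectionTestStatement from the backend configuration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequoiaSmokeTest {
    public static void main(String[] args) throws Exception {
        // Register the Sequoia driver explicitly (needed on older JVMs
        // that predate JDBC 4 auto-loading).
        Class.forName("org.continuent.sequoia.driver.Driver");
        // The URL lists both controllers; the driver fails over between them.
        String url = "jdbc:sequoia://192.168.10.19:25322,192.168.10.20:25323/lportal";
        try (Connection con = DriverManager.getConnection(url, "user", "userPassword");
             Statement st = con.createStatement();
             // "call now()" is the HSQLDB test statement used in the config above.
             ResultSet rs = st.executeQuery("call now()")) {
            while (rs.next()) {
                System.out.println("Virtual database answered: " + rs.getString(1));
            }
        }
    }
}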
Step 3: portal-ext.properties
Add the following line to $TOMCAT_HOME/webapps/ROOT/WEB-INF/classes/portal-ext.properties:
hibernate.dialect=org.hibernate.dialect.HSQLDialect
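Hibernate still generates SQL for the real backend databases, so the dialect stays HSQLDialect even though connections are routed through the Sequoia driver.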
Start Tomcat #
In Linux, start Liferay with $TOMCAT_HOME/bin/startup.sh
In Windows, %TOMCAT_HOME%/bin/startup.bat
Shutdown procedure #
Step 1: Shutdown Liferay:
In Linux, $TOMCAT_HOME/bin/shutdown.sh
In Windows, %TOMCAT_HOME%/bin/shutdown.bat
Step 2: Shutdown Sequoia:
In Linux, $SEQUOIA_HOME/bin/console.sh
In Windows, %SEQUOIA_HOME%/bin/console.bat
type:
shutdown virtualdatabase lportal
shutdown
quit
Reference #
http://sequoia.continuent.org/HomePage
http://www.onlamp.com/pub/wlg/7963
Dynamic Virtual Hosting #
Liferay provides a new dynamic virtual hosting model starting with version 4.2. Dynamic virtual hosting lets you add virtual hosts without restarting Tomcat; all hosts are served by the same web application.