The spring-cloud-data project provides orchestration for data microservices, including spring-cloud-stream modules. The spring-cloud-data domain model includes the concept of a stream that is a composition of spring-cloud-stream modules in a linear pipeline from a source to a sink.
The Module Registry maintains the set of available modules, and their mappings to Maven coordinates.
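Conceptually, the registry is a lookup from a module name to its Maven coordinates. The sketch below is illustrative only; the module names and coordinates shown are hypothetical, not the registry's actual contents:

```shell
# Illustrative sketch of the Module Registry's name -> Maven coordinates
# mapping (module names and coordinates here are hypothetical).
resolve_module() {
  case "$1" in
    time) echo "org.springframework.cloud.stream.module:time-source:1.0.0.BUILD-SNAPSHOT" ;;
    log)  echo "org.springframework.cloud.stream.module:log-sink:1.0.0.BUILD-SNAPSHOT" ;;
    *)    echo "unknown module: $1" >&2; return 1 ;;
  esac
}

resolve_module time
```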
The Module Deployer SPI provides the abstraction layer for deploying the modules of a given stream across a variety of runtime environments, including singlenode, Lattice, and YARN, as described below.
The Admin provides a REST API and UI. It is an executable Spring Boot application that is profile aware, so that the proper implementation of the Module Deployer SPI will be instantiated based on the environment within which the Admin application itself is running.
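In other words, the same Admin jar can be launched with a different Spring profile per environment, and the matching Module Deployer implementation is picked up at startup. A minimal sketch of that invocation pattern (the profile names and jar name below are illustrative; the YARN section later in this document uses `-Dspring.profiles.active=yarn`):

```shell
# Sketch: one executable Admin jar, a different Spring profile per
# environment (profile and jar names here are illustrative).
for profile in singlenode lattice yarn; do
  echo "java -Dspring.profiles.active=$profile -jar spring-cloud-data-admin.jar"
done
```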
The Shell connects to the Admin's REST API and supports a DSL that simplifies the process of defining a stream and managing its lifecycle.
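A stream definition in the DSL is a linear, pipe-delimited pipeline: the first module is the source and the last is the sink. The snippet below only illustrates that shape (the real parsing happens in the Admin, not via shell tools):

```shell
# Illustrative only: decompose a DSL stream definition into its modules.
# The first module is the source, the last is the sink.
definition="time | log"
source_module=$(echo "$definition" | cut -d '|' -f 1 | xargs)
sink_module=$(echo "$definition" | cut -d '|' -f 2 | xargs)
echo "source=$source_module sink=$sink_module"
```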
The instructions below describe the process of running both the Admin and the Shell across different runtime environments.
1. start Redis locally via redis-server
2. clone this repository and build from the root directory:
git clone https://github.com/spring-cloud/spring-cloud-data.git
cd spring-cloud-data
mvn clean package
3. launch the admin:
$ java -jar spring-cloud-data-admin/target/spring-cloud-data-admin-1.0.0.BUILD-SNAPSHOT.jar
4. launch the shell:
$ java -jar spring-cloud-data-shell/target/spring-cloud-data-shell-1.0.0.BUILD-SNAPSHOT.jar
Thus far, only the following commands are supported in the shell when running in singlenode mode:
stream list
stream create
stream deploy
1. start Redis on Lattice (running as root):
ltc create redis redis -r
2. launch the admin, with a mapping for port 9393 and extra memory (the default is 128MB):
ltc create admin springcloud/data-admin -p 9393 -m 512
3. launch the shell as above; once running, it must be configured to point to the Admin that is running on Lattice:
server-unknown:>admin config server http://admin.192.168.11.11.xip.io
Successfully targeted http://admin.192.168.11.11.xip.io
cloud-data:>
All stream commands are supported in the shell when running on Lattice:
stream list
stream create
stream deploy
stream undeploy
stream all undeploy
stream destroy
stream all destroy
Work in progress, stay tuned!
The current YARN configuration is set to use localhost, meaning this can only be run against a local cluster. Also, all commands need to be run from the project root.
1. build the packages:
$ mvn clean package
2. start Redis locally via redis-server
3. optionally wipe existing data on HDFS:
$ hdfs dfs -rm -R /app/app
4. start the CLI app, then push and submit the app to YARN:
$ java -jar spring-cloud-data-yarn/spring-cloud-data-yarn-client/target/spring-cloud-data-yarn-client-1.0.0.BUILD-SNAPSHOT.jar shell
Spring YARN Cli (v2.3.0.M1)
Hit TAB to complete. Type 'help' and hit RETURN for help, and 'exit' to quit.
$ push
New version installed
$ submit
New instance submitted with id application_1439285616431_0010
$ submitted
APPLICATION ID USER NAME QUEUE TYPE STARTTIME FINISHTIME STATE FINALSTATUS ORIGINAL TRACKING URL
------------------------------ ------------ -------------------------- ------- ---- -------------- ---------- ------- ----------- --------------------------
application_1439285616431_0010 jvalkealahti spring-cloud-data-yarn-app default XD 12/08/15 08:32 N/A RUNNING UNDEFINED http://192.168.122.1:51656
5. start spring-cloud-data-rest with the yarn profile:
$ java -Dspring.profiles.active=yarn -jar spring-cloud-data-rest/target/spring-cloud-data-rest-1.0.0.BUILD-SNAPSHOT.jar
6. start spring-cloud-data-shell:
$ java -jar spring-cloud-data-shell/target/spring-cloud-data-shell-1.0.0.BUILD-SNAPSHOT.jar
cloud-data:>stream create --name ticktock --definition "time|log" --deploy
Created and deployed new stream 'ticktock'
cloud-data:>stream list
Stream Name Stream Definition Status
----------- ----------------- --------
ticktock time|log deployed
cloud-data:>stream destroy --name ticktock
Destroyed stream 'ticktock'