Merge pull request #120 from zalando-incubator/develop
Merge #118 Release 2.0.2
kanuku authored Oct 28, 2016
2 parents e4538ba + d177875 commit b941e42
Showing 144 changed files with 8,952 additions and 3,116 deletions.
46 changes: 42 additions & 4 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,4 +1,42 @@
/target/
/project/*/
/.idea/
*.iml
*.class
*.log

# sbt specific
.cache
.history
.lib/
dist/*
**target/
lib_managed/
src_managed/
project/boot/
project/plugins/project/

# Scala-IDE specific
.scala_dependencies
.worksheet

logs
project/project
project/target
target
tmp
.history
dist
/.idea
/*.iml
/out
/.idea_modules
**/.classpath
**/.project
/RUNNING_PID
**/.settings
**/.cache-main
**/.cache-tests
**/bin
.DS_Store

#misc
aws_host.txt
scm-source.json
*.notes
3 changes: 3 additions & 0 deletions .scalafmt
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
# Example .scalafmt, comments are supported.
--style defaultWithAlign # For pretty alignment.
--maxColumn 120 # For my wide 30" display.
44 changes: 44 additions & 0 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,44 @@
language: scala
sudo: required

jdk:
- oraclejdk8

scala:
- 2.11.7

services:
- docker

# fix for buffer overflow https://github.com/travis-ci/travis-ci/issues/5227
addons:
hostname: localhost

env:
matrix:
- DOCKER_IP=127.0.0.1

global:
- secure: pucnGYKmiGHmjIx7yf3HUhsvzjAJwE2vhvFcOvPYwy8PoIG51nvDSpgv5criK5RCaE2/t06hZjvjvRQ04FA/vrgex+NxBJNgEZ33Lmjdx2ncYCnvVDmWY43cJva3fMznFny0wifPvs9e73Yp75tTiZctCY0iiMDhyd9tCHOhRYs2Dwr1Sg5QBF616rgZRiY9iYWyo0TR1IEB8diBvTX5WFWJQtMcAF2U3BhP8SS44ds7W5Abz/vWcQJI27lnqUQstfEGkZiWMTLEGakmhG4NApBEwPT8tFIEJnF8UmHF/pO1hNM22Ge7kaVAN/X2Q/wvXdmsB5bAMVFWpgI52ZfMNQwcQc0gXHA7+aQ+Hz5Fe0tbyWOxldx3Vr8ZZLUl1Ag0yFfaY/BKIcrDKgrpSmYeCaN5QyFDgv4itUTNL03qtYg3PZDS8lljI2+WOZQ86EpN+tFz2DObBB7bNMVMxSibM1+Tdy46Rft8h17KX/jla9StAYY78DlwRlAQbVBM+eGybNtQSp7QYAyOpSPdcGyQUOweBP+Ht57wPE2VNuJToYn00t+2XCyMLU9Bx5UTJx+FwhaLfkMW+3S8cMRgbNXWzz9SA/rSNsbc5AwfO2OrLnFqSgo1OAK3K/GBXIiFnlBgrMAvk6PH05aHf0HExBXIR+X1x2mT63zN9M1gL7vt9h4=
- secure: RcTCf1WRsZfw3lEuUE+WS96cUZ0kuYxdgFy0CVrCB7oR7GZd6jxupJgJsbpG/29XlAWyhwIthmWof/tU/3lS+OKFKKXw7qjpc/nUQwKwC6RvDnHt1w+4GbUh1NiNcuIlMMnquWFlAmQ9ubs86nIfez0WP+RVWy4KK8i9H1R1wGdw5DD+dlZ5Ckm0qSaBPrpBC7ML/17/JD5T9UDPAB5G3qYJ6w9tmvMLx9xEsGG7b79plIu9xGpVWVZqoidOQCN9fb1WmmdGkC0EwWb+b7cZ1JgTB+/wdOyEJpaabKVj87jS/B1kAZWw5LhMXQPcee4TxKlIBBbIme3aoxrqqJ+4qz8GCuvgRdqi9DyJL3i3EVfoDeje4gjzka9IetV8cJ4n1tYnEdt6Ft2zwMlTa6SggJR4dc142IuG7W/nmlHbLOnZuy8XYgk2GHlnUcEpOI3FQ7ln59EPPKBKla+YO8BiNGTum4DHHaMcvhv5976Ho3Z8d7+vgT0IpJChvBBuGOrcjFHTStABFt9S78xqjXmR2aOXe9SNYUi4bQX4DWHAb+3sDjofVb+xnRCCkrxZ4+YUgmj2pJhrAnDTsgFCvDwl4UaqAyhcjdSQukTIg0+O15/Oj/lSMpRarFgVezyootP6wzoVGR/lc/B0w243xTW4NsX5nKb4AgMBpsDYB3YCDug=

before_install:
- sudo sh -c 'echo "deb https://apt.dockerproject.org/repo ubuntu-precise main" > /etc/apt/sources.list.d/docker.list'
- sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
- sudo apt-get update
- sudo apt-key update
- sudo apt-get -qqy -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install docker-engine=1.11.1-0~precise
- sudo rm -f /usr/local/bin/docker-compose
- curl --verbose -L https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m` > docker-compose
- chmod +x docker-compose
- sudo mv docker-compose /usr/local/bin

before_script:
- sudo /etc/init.d/postgresql stop
- bash ./it/src/main/resources/nakadi.sh start

script:
- sbt clean test

after_script:
- bash ./it/src/main/resources/nakadi.sh stop
13 changes: 13 additions & 0 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
# Contributing

## Pull requests only

**DON'T** push to the master branch directly. Always use feature branches and let people discuss changes in pull requests.
Pull requests should only be merged after all discussions have been concluded and at least 1 reviewer has given their
**approval**.

## Guidelines

- **every change** needs a test
- 100% code coverage
- keep the current code style
192 changes: 7 additions & 185 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,198 +1,20 @@

## VERY IMPORTANT NOTE!
There are two versions of Nakadi-klients:
* `version: 1.**` - Based on **an old version of the Nakadi API** which never became production ready. **This version is discontinued and no longer supported!**
* `version: 2.**` - Based on the new Nakadi API, which **is still under development** and not yet production ready.

Nakadi Klients
==============

Implementation of a non-blocking client accessing the low-level API of the [Nakadi event bus](https://github.com/zalando/nakadi). Internally, it uses [Akka](http://akka.io/) and [Akka HTTP](http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.2/scala/http/) for its communication tasks.

Please note that the client provides a Scala as well as a Java interface.

## Prerequisites
- Java >= 1.8
- Scala >= 2.11



## Tutorial

### Configuration

```
nakadi.client {
  noListenerReconnectDelay = 10 seconds   // if no listener could be found, no connection to Nakadi is
                                          // established; noListenerReconnectDelay specifies the delay after
                                          // which the actor should try to connect again
  pollParallelism = 100                   // number of parallel polls from a specific host
  receiveBufferSize = 1024 bytes          // initial buffer size for event retrieval
  defaultBatchFlushTimeout = 5 seconds    // default batch flush timeout set in ListenParameters
  defaultBatchLimit = 1                   // default batch limit set in ListenParameters
  defaultStreamLimit = 0                  // default stream limit set in ListenParameters
  supervisor {
    // note: supervisor strategy parameter names come from Akka - keep them as-is
    maxNrOfRetries = 100
    withinTimeRange = 5 minutes
    resolveActorTimeout = 1 second        // timeout for resolving the PartitionReceiver actor reference
  }
}
```
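These settings are plain Typesafe Config (HOCON), the same format Akka uses. Assuming the client picks up configuration loaded via `ConfigFactory` (an assumption; the library's exact loading mechanism is not shown here), individual keys can be overridden programmatically:

```scala
import java.util.concurrent.TimeUnit
import com.typesafe.config.{Config, ConfigFactory}

// Override a single key on top of the application's config; the key names
// mirror the reference block above.
val overrides: Config = ConfigFactory.parseString(
  "nakadi.client.pollParallelism = 50")
val config: Config = overrides.withFallback(ConfigFactory.load())

val pollParallelism = config.getInt("nakadi.client.pollParallelism")
val reconnectDelay  = config.getDuration("nakadi.client.noListenerReconnectDelay", TimeUnit.SECONDS)
```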


### Instantiate Client

`Scala`
```scala
val klient = KlientBuilder()
  .withEndpoint(new URI("localhost"))
  .withPort(8080)
  .withSecuredConnection(false)
  .withTokenProvider(() => "<my token>")
  .build()
```

`Java`
```Java
final Client client = new KlientBuilder()
    .withEndpoint(new URI("localhost"))
    .withPort(8080)
    .withSecuredConnection(false)
    .withJavaTokenProvider(() -> "<my token>")
    .buildJavaClient();
```
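The token provider is just a `() => String`. As a sketch (the environment-variable name `NAKADI_TOKEN` is an assumption, not something the library mandates), a provider that pulls the OAuth2 token from the environment could look like this:

```scala
// Build a token provider from an environment map; in production you would
// pass sys.env here, or plug in STUPS' tokens library instead.
def tokenProviderFrom(env: Map[String, String]): () => String =
  () => env.getOrElse(
    "NAKADI_TOKEN",
    throw new IllegalStateException("NAKADI_TOKEN is not set"))
```

This can then be passed to the builder via `.withTokenProvider(tokenProviderFrom(sys.env))`.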

### Get Monitoring Metrics
NOTE: the metrics format is not yet fixed

`Scala`
```scala
klient.getMetrics map {
  case Left(error)                      => fail(s"could not retrieve metrics: $error")
  case Right(metrics: Map[String, Any]) => logger.debug(s"metrics => $metrics")
}
```

`Java`
```java
Future<Map<String, Object>> metrics = client.getMetrics();
```

### List All Known Topics

`Scala`
```scala
klient.getTopics map {
  case Left(error)                => fail(s"could not retrieve topics: $error")
  case Right(topics: List[Topic]) => logger.debug(s"topics => $topics")
}
```

`Java`
```java
Future<List<Topic>> topics = klient.getTopics();
```

### Post a Single Event to the Given Topic
Partition selection uses the partition resolution strategy, which is defined per topic and managed by the
event bus (currently a hash over `ordering_key`).

`Scala`
```scala
val event = Event("eventType",
                  "orderingKey",
                  Map("id" -> "1234567890"),
                  Map("greeting" -> "hello",
                      "target"   -> "world"))

klient.postEvent(topic, event) map {
  case Left(error) => fail(s"an error occurred while posting the event to topic $topic: $error")
  case Right(_)    => logger.debug("event post request was successful")
}
```

`Java`
```java
HashMap<String,Object> meta = Maps.newHashMap();
meta.put("id", "1234567890");

HashMap<String, String> body = Maps.newHashMap();
body.put("greeting", "hello");
body.put("target", "world");

Future<Void> f = client.postEvent("test", new Event("eventType", "orderingKey", meta, body));
```
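The hash-based resolution described above happens in the event bus, not in the client, but its effect can be sketched in a few lines (illustrative only; Nakadi's actual hash function is not specified here):

```scala
// Map an ordering key onto one of `partitionCount` partitions.
// Math.floorMod keeps the result non-negative even for negative hash codes.
def partitionFor(orderingKey: String, partitionCount: Int): Int =
  Math.floorMod(orderingKey.hashCode, partitionCount)
```

The same key always lands on the same partition, which is what preserves per-key event ordering.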


### Get Partition Information of a Given Topic

`Scala`
```scala
klient.getPartitions("topic") map {
  case Left(error: String)                     => fail(s"could not retrieve partitions: $error")
  case Right(partitions: List[TopicPartition]) => partitions
}
```

`Java`
```java
Future<List<TopicPartition>> topics = klient.getPartitions("topic");
```

### Subscribe to a Given Topic
Non-blocking subscription to a topic requires a `Listener` implementation. The listener does not have to be
thread-safe, because each listener is handled by its own Akka actor.

`Scala`
```scala
klient.subscribeToTopic("topic", ListenParameters(), listener, autoReconnect = true) // autoReconnect default is true
```

`Java`
```java
public final class MyListener implements JListener {...}

client.subscribeToTopic("topic",
                        ListenParametersUtils.defaultInstance(),
                        new JListenerWrapper(new MyListener()),
                        true);

```
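Since each listener runs inside its own actor, it can safely keep plain mutable state. The trait below is a stand-in to keep the sketch self-contained; the actual `Listener`/`JListener` interfaces of nakadi-klients may declare different method names and signatures:

```scala
import scala.collection.mutable

// Illustrative stand-in for the library's listener interface.
trait EventCallback {
  def onReceive(topic: String, partition: String, payload: String): Unit
  def onError(topic: String, error: Throwable): Unit
}

// No synchronization needed: the owning actor delivers events one at a time.
final class CollectingCallback extends EventCallback {
  val received = mutable.Buffer.empty[String]

  override def onReceive(topic: String, partition: String, payload: String): Unit =
    received += s"$topic/$partition: $payload"

  override def onError(topic: String, error: Throwable): Unit =
    println(s"[$topic] listener error: ${error.getMessage}")
}
```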

### Subscribe to a Specific Partition
Non-blocking subscription to the events of a specified topic and partition.

`Scala`
```scala
klient.listenForEvents("topic",
                       "partitionId",
                       ListenParameters(),
                       listener,
                       autoReconnect = false) // default is false
```

`Java`
```java
public final class MyListener implements JListener {...}

client.listenForEvent("topic",
                      "partitionId",
                      ListenParametersUtils.defaultInstance(),
                      new JListenerWrapper(new MyListener()));
```
## Note

* The new client has pre-released versions that can be found [here](https://github.com/zalando/nakadi-klients/releases).
* The new client documentation can be found in the [wiki](https://github.com/zalando/nakadi-klients/wiki).

## See
- [Nakadi event bus](https://github.com/zalando/nakadi)
- [STUPS](https://github.com/zalando-stups)
- [STUPS' tokens library](https://github.com/zalando-stups/tokens)

## TODO
- [ ] handle case where separate clusters consisting of 1 member are built

## License
http://opensource.org/licenses/MIT