Programming your body with chip implants
Pär Sikö
- chip: 12 mm x 2 mm -- same size as for pets
- rfid: entrance system
- nfc: smartcards: 1kb
- no battery -- energy harvesting
- communication technology
- active
Optional - The Mother of All Bikesheds
Stuart Marks
- Optional: Java 8 (java.util)
- non-null ref (present) or empty (absent)
- primitives: OptionalInt, OptionalLong, OptionalDouble
- never use "null" as ref in Optional
- "limited mech for returntypes where null will very likely return errors" e.g. streams api: Optional prevents NPE's in chained calls
- Optional.get() on an empty Optional throws NoSuchElementException
- never call Optional.get() unless you can prove that the Optional is present
- prefer alternatives to Optional.isPresent() / .get() (see the sketch after this list)
- use: orElse() / orElseGet() / orElseThrow()
- Optional.filter() predicate
- Optional.ifPresent(): executes a lambda if a value is present (not the same as isPresent())
- other methods
- empty()
- of()
- flatMap()
- ...
- stream of Optionals: .filter(Optional::isPresent).map(Optional::get).collect(...) -- keeps the present Optionals and extracts their values
- simple null checks: avoid wrapping them in Optional chains
- too complex constructs: long Optional chains should be avoided
- Optional.get() "attractive nuisance" -- will be deprecated
- do not use Optional for
- fields
- method parameters
- collections
- replacing every null
- Optional allocates extra objects -- consider the performance impact
- value-based class: no identity-sensitive operations (==, identity hash, locking); also not serializable
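
A minimal sketch of the patterns above (the lookup method and names are illustrative, not from the talk):

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OptionalPatterns {

    // Hypothetical lookup that may legitimately find nothing.
    static Optional<String> findNickname(String user) {
        return "duke".equals(user) ? Optional.of("Duke") : Optional.empty();
    }

    public static void main(String[] args) {
        // Prefer orElse / orElseGet / orElseThrow over isPresent() + get().
        String eager = findNickname("duke").orElse("anonymous");
        String lazy  = findNickname("nobody").orElseGet(() -> "anonymous");
        String must  = findNickname("duke").orElseThrow(IllegalStateException::new);

        // ifPresent(): run a lambda only when a value is there.
        findNickname("duke").ifPresent(n -> System.out.println("hello " + n));

        // filter(): keep the value only if it matches a predicate.
        Optional<String> shortNick = findNickname("duke").filter(n -> n.length() <= 4);

        // Stream of Optionals: keep the present ones and unwrap them.
        List<String> nicknames = Stream.of(findNickname("duke"), findNickname("nobody"))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .collect(Collectors.toList());

        System.out.println(eager + " / " + lazy + " / " + must
                + " / " + shortNick + " / " + nicknames);
    }
}
```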
A Crash Course in Modern Hardware
Cliff Click
- classic Von Neumann Architecture
- throughput / core +10% / year (single-threaded)
- CISC: easier to program, but harder to optimize (page faults)
- RISC: simpler instructions, faster execution
- walls:
- power wall
- ILP wall (branch prediction, speculative execution)
- pipelining
- better throughput, but latency remains
- cache misses cause stalls -- performance is dominated by cache misses
- branch prediction: ~95% success rate
- Itanium: static ILP: not much gain for huge effort
- x86: limited by cache misses / branch mispredicts
- locality is critical (see the array-traversal sketch after this list)
- memory wall
- memory is larger, but latency is still high (DRAM)
- SRAM for caches
- requires data locality
- cache layers
- "memory is the new disk"
- faster memory
- relax coherency constraints
- better throughput
- speed of light
- flat clock rates (15y)
- hyper-threading: same limits (cache misses)
- more cores
- challenges:
- chips reorder memory operations
- concurrency is hard
- immutable data
- missing toolsets
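
A small illustration of the locality point above (my own sketch, not from the talk; timings are rough since there is no JIT warm-up): both loops touch the same data, but the row-order walk streams through consecutive memory while the column-order walk jumps between rows and misses the cache far more often.

```java
public class LocalityDemo {
    public static void main(String[] args) {
        final int n = 4096;
        int[][] a = new int[n][n];          // ~64 MB of ints, far larger than any cache

        long t0 = System.nanoTime();
        long rowSum = 0;
        // Row-order: consecutive elements of each row array, good cache-line reuse.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                rowSum += a[i][j];
        long t1 = System.nanoTime();

        long colSum = 0;
        // Column-order: every access lands in a different row array,
        // so nearly every access touches a new cache line.
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                colSum += a[i][j];
        long t2 = System.nanoTime();

        System.out.printf("row-order: %d ms, column-order: %d ms (checksums %d/%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rowSum, colSum);
    }
}
```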
The ISS position in real time on my mobile in less than 15mn ? Yes, we can.
Audrey Neveu
- api.open-notify.org
- ionic + cordova
- server-sent events: push technology, text-based (see the SSE sketch after this list)
- streamdata.io proxies the API as a server-sent-events stream
- JSON Patch (RFC 6902) for pushing only the changes
- demo
- ionic start iss.io maps (=template)
- ionic serve --lab
- bower.json --> bower install
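
The demo itself is Ionic/Cordova (JavaScript); to keep one language in these notes, here is a minimal Java 11 sketch of what consuming a server-sent-events stream looks like. The URL is a placeholder standing in for the streamdata.io proxy in front of api.open-notify.org.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SseClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: in the demo, streamdata.io proxies the polled
        // open-notify API and re-exposes it as an SSE stream of JSON Patches.
        URI stream = URI.create("https://sse-proxy.example.com/iss-now");

        HttpRequest request = HttpRequest.newBuilder(stream)
                .header("Accept", "text/event-stream")
                .build();

        // SSE is a plain-text protocol: a long-lived response made of
        // "event:", "id:" and "data:" lines separated by blank lines.
        HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofLines())
                .body()
                .filter(line -> line.startsWith("data:"))
                .forEach(line -> System.out.println(line.substring("data:".length()).trim()));
    }
}
```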
Graph Databases and the "Panama Papers"
Stefan Armbruster
- Panama Papers: 2.6 TB of data, 3 million files
- tools
- Nuix OCR
- icij extract for metadata extraction: https://github.com/ICIJ/extract
- blacklight (opensource collaboration tool: http://projectblacklight.org/)
- db
- solr
- redis
- neo4j https://neo4j.com/
- linkurious visualisation http://linkurio.us/ (commercial)
- nodes: entities (can have name/value properties)
- relationships: type + direction (=semantic)
- internal
- network / IT operations
- data management
- customer facing
- real-time recommendations
- graph-based search
- identity/access management
- graph database -- the drawn structure maps directly to the stored model
- solves relational pains (logical vs table model)
- open source
- easy to use
- ACID
- scalable (as of Neo4j 3.1)
- Cypher syntax (see the Java driver sketch after this list)
- patterns: (:Person {name:"Dan"})-[:KNOWS]->(:Person {name:"Ann"})
- clauses: CREATE / MERGE / SET / DELETE ...
- MATCH
- WHERE / ORDER BY
- pagination: SKIP / LIMIT
- LOAD CSV
- demo
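
A minimal sketch of the Cypher syntax above, driven from Java with the Neo4j Java driver (4.x API; the bolt URL, credentials, and sample data are placeholders, not from the talk):

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

public class CypherSketch {
    public static void main(String[] args) {
        // Placeholder connection details for a local Neo4j instance.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "secret"));
             Session session = driver.session()) {

            // Nodes with properties, plus a typed, directed relationship.
            session.run("MERGE (d:Person {name:'Dan'}) "
                      + "MERGE (a:Person {name:'Ann'}) "
                      + "MERGE (d)-[:KNOWS]->(a)");

            // MATCH a pattern, filter with WHERE, paginate with SKIP / LIMIT.
            Result result = session.run(
                    "MATCH (p:Person)-[:KNOWS]->(friend:Person) "
                  + "WHERE p.name = 'Dan' "
                  + "RETURN friend.name AS name ORDER BY name SKIP 0 LIMIT 10");

            while (result.hasNext()) {
                System.out.println(result.next().get("name").asString());
            }
        }
    }
}
```

This assumes the org.neo4j.driver:neo4j-java-driver dependency is on the classpath.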
A JVM does That?
Cliff Click
services -- "Virtual"
- high quality GC
- high quality machine code gen
- uniform threading / memory model
- type safety
- ...
- infinite memory (an illusion provided by GC) -- cost: GC pauses
- jvm optimizes
- byte code is fast:
- JIT brings back expected cost model (gcc -O2 level)
- JIT requires profiling
- virtual calls are slow: Java makes them fast
- inline caches (see the sketch after this list)
- partial programs are fast: requires deoptimization, re-profiling, re-JITting
- consistent memory model: every machine has a different memory model -- the JVM handles this
- consistent thread model: the JVM improves locking etc.
- Locks are fast
- quick time access: difficult in hardware with multiple threads
- gettimeofday in java
- tail calls
- Integer as cheap as int
- BigInteger as cheap as int
- atomic multi-address update (software transactional memory)
- thread priorities: on Linux, only settable as root
- finalizers: "eventually" runs -- might be never (no timeliness guarantees)
- soft/phantom refs: difficult to maintain in GC
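
A tiny sketch of the virtual-call point above (my own example, not from the talk): the call site below only ever sees one Shape implementation, so after profiling the JIT can treat it as monomorphic, devirtualize it via an inline cache, and usually inline area() into the loop.

```java
public class InlineCacheDemo {

    interface Shape { double area(); }

    static final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        @Override public double area() { return Math.PI * r * r; }
    }

    // A virtual (interface) call in a hot loop. Because only Circle ever
    // reaches this call site, the JIT's inline cache records a single
    // receiver type and replaces the dispatch with a direct, inlinable call.
    static double sumAreas(Shape s, int iterations) {
        double sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumAreas(new Circle(1.0), 10_000_000));
    }
}
```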