(#nsvyaxq) The one we need is the one that resolves sticky communication issues between arbitrary deployment technology choices with completely skewed practices. That is to say, take BeanShell for the worst example: you suddenly need 0) declarations pointing to arbitrary executables that give 0/1 responses, 1) an endpoint to check live deployment status, 2) an endpoint to flush the current config to disk, 3) an audit of all config changes, 4) endpoints to load a config from a given spot on disk or from the expected config file, and 5) source-control-style history of config values and changes, like a ChainMap.
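As a rough sketch of what that surface could look like (stdlib-only Python; every path, port, and route name here is a hypothetical placeholder, not an existing tool's API):

```python
import json
import subprocess
from collections import ChainMap
from http.server import BaseHTTPRequestHandler, HTTPServer

CONFIG_PATH = "/etc/app/config.json"             # hypothetical expected config file
HEALTH_CHECK = "/usr/local/bin/app-healthcheck"  # hypothetical 0/1 executable

audit_log = []        # 3) every config change is appended here
layers = ChainMap({}) # 5) newest layer wins; older layers are kept as history

def set_config(key, value):
    """Apply a change as a new ChainMap layer so prior values stay visible."""
    global layers
    audit_log.append({"key": key, "value": value})
    layers = layers.new_child({key: value})

def load_config(mapping):
    for k, v in mapping.items():
        set_config(k, v)

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":     # 0) + 1) run the declared check executable
            rc = subprocess.run([HEALTH_CHECK]).returncode
            body = b"up" if rc == 0 else b"down"
        elif self.path == "/flush":    # 2) flush the effective config to disk
            with open(CONFIG_PATH, "w") as f:
                json.dump(dict(layers), f)
            body = b"flushed"
        elif self.path == "/load":     # 4) load from the expected config file
            with open(CONFIG_PATH) as f:
                load_config(json.load(f))
            body = b"loaded"
        elif self.path == "/audit":    # 3) dump the change history
            body = json.dumps(audit_log).encode()
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ConfigHandler).serve_forever()
```

The ChainMap is the point: `dict(layers)` flattens to the effective config, while the layers themselves are the ordered history of every change.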

@prologic (#tgmogmq) This is to say, FEC and data duplication solve different problems. Disaster recovery is possible with data duplication, but a bug in your FEC code combined with no duplication will prevent it. Use case matters because your definition of HA could be 99.99% or 99.999%: the first can be a single-instance MySQL; the second needs a cluster. Push further and you start stepping into things like TiDB or Spanner.
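To put numbers on those two targets, here is the downtime budget each one allows per year (back-of-envelope arithmetic only):

```python
# Downtime budget implied by each availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

for label, availability in [("99.99%", 0.9999), ("99.999%", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} allows ~{downtime:.1f} min of downtime per year")

# 99.99%  -> ~52.6 min/year: reachable with single-instance MySQL and fast restarts
# 99.999% -> ~5.3 min/year: effectively requires automated failover, i.e. a cluster
```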

@prologic (#tgmogmq) My read on Reed-Solomon erasure coding is that it's the generalized implementation of RAID 5, and that replication factor refers to the complete replica count of the stored data, ignoring any capability of recovery through FEC. If you trust your FEC, you'll be happy with a replica count of 1. HA, of course, depends on the use case: CAP teaches us to sacrifice availability, but you can build very HA systems and keep C & P. Block size? I assume that's about compression effectiveness: you get better compression with more data.
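Reed-Solomon proper needs finite-field arithmetic, but the RAID 5 special case it generalizes (a single parity block) is plain XOR, so a toy sketch can show why a replica count of 1 plus parity still survives a single loss:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks plus one parity block: RAID 5's layout, which
# Reed-Solomon generalizes to arbitrary numbers of parity blocks.
data = [b"blockA..", b"blockB..", b"blockC.."]
parity = xor_blocks(data)

# Simulate losing one data block, then rebuild it from survivors + parity.
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost_index]
print("recovered:", recovered)
```

With k data blocks and one parity block, any single missing block is just the XOR of the survivors; Reed-Solomon extends this to m parity blocks tolerating any m losses, which is what makes a replica count of 1 defensible when you trust the FEC.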
