@prologic (#tgmogmq) this is to say, FEC & data duplication solve different problems. Disaster recovery is possible with data duplication, but a bug in your FEC code, with no duplication to fall back on, will prevent it. Use case matters: is your definition of HA 99.99% or 99.999%? The first can be a single-instance MySQL. The second needs a cluster. More nines, and you start stepping into things like TiDB or Spanner.
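To put numbers on those nines, here's a quick sketch (my arithmetic, not anything from the thread) of the yearly downtime budget each availability target allows:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	year := 365.25 * 24 * time.Hour
	for _, availability := range []float64{0.9999, 0.99999} {
		// Downtime budget = (1 - availability) * one year.
		budget := time.Duration((1 - availability) * float64(year))
		fmt.Printf("%.3f%% -> ~%v of downtime per year\n",
			availability*100, budget.Round(time.Second))
	}
}
```

99.99% leaves roughly 53 minutes a year and 99.999% roughly 5, which is why the second target pushes you from a single instance to a cluster.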

@prologic (#tgmogmq) my read on Reed-Solomon erasure coding is that it's the generalized form of RAID 5, and that replication factor counts the complete replicas of the stored data, ignoring any recovery the FEC provides. If you trust your FEC, then you'll be happy with a replica count of 1. HA, of course, depends on use case. CAP teaches us to sacrifice availability under a partition, but you can build very HA systems and keep C & P. Block size? I assume that's about compression effectiveness: you'll get better compression with more data per block.
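A minimal sketch of that "replica count of 1, trust the FEC" idea, using the klauspost/reedsolomon Go library (my choice of implementation, not something the thread specifies). RAID 5 corresponds to one parity shard; adding more parity shards is the generalization:

```go
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	// 4 data shards + 2 parity shards: no complete replica exists
	// anywhere, yet any 2 shards can be lost and still recovered.
	// (4 data + 1 parity would be the RAID 5 layout.)
	enc, err := reedsolomon.New(4, 2)
	if err != nil {
		log.Fatal(err)
	}

	data := []byte("a blob we want to keep durable without full copies")
	shards, err := enc.Split(data)
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil { // compute the parity shards
		log.Fatal(err)
	}

	// Simulate two node failures: drop one data and one parity shard.
	shards[1], shards[4] = nil, nil

	// FEC reconstructs the missing shards from the survivors.
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	ok, err := enc.Verify(shards)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("all shards consistent after reconstruction:", ok)
}
```

This is the trade the posts above are circling: sharded FEC survives node loss the way replication would, but a bug in the encode/reconstruct path can sink every shard at once, which is the disaster-recovery argument for keeping plain duplicates too.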
