
Commit b01a9b3

Add in missing card from home page for job orchestration, take out reference to data tech

pflooky committed Dec 28, 2023
1 parent 503881d commit b01a9b3
Showing 4 changed files with 17 additions and 6 deletions.
10 changes: 8 additions & 2 deletions docs/index.md
@@ -1,6 +1,6 @@
# Data Tech Compare
# Tech Compare

Compare all data related technologies with each other to find the best fit for you and your use case.
Compare technologies with each other to find the best fit for you and your use case.

## Categories

@@ -12,6 +12,12 @@ Compare all data related technologies with each other to find the best fit for y

CSV, Parquet, ORC, JSON, Avro, etc.

- :octicons-tasklist-16:{ .lg .middle } __[Job orchestration](job-orchestration/index.md)__

---

Apache Airflow, Dagster, Prefect, Mage, etc.

</div>
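
The added card relies on Material for MkDocs grid cards plus an Octicons icon shortcode (`:octicons-tasklist-16:{ .lg .middle }`). The project's mkdocs.yml is not part of this diff, so the snippet below is only a minimal sketch of the Markdown extension setup such a card normally assumes (extension names follow recent Material for MkDocs releases; the actual config may differ):

```yaml
# Hypothetical mkdocs.yml excerpt -- not part of this commit.
# Grid cards need attr_list + md_in_html; icon shortcodes such as
# :octicons-tasklist-16: are resolved by pymdownx.emoji.
theme:
  name: material
markdown_extensions:
  - attr_list
  - md_in_html
  - pymdownx.emoji:
      emoji_index: !!python/name:material.extensions.emoji.twemoji
      emoji_generator: !!python/name:material.extensions.emoji.to_svg
```

At build time the shortcode is expanded into the inline SVG that shows up in the generated site/index.html diff below.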


11 changes: 8 additions & 3 deletions site/index.html
@@ -92,7 +92,7 @@
<div data-md-component="skip">


<a href="#data-tech-compare" class="md-skip">
<a href="#tech-compare" class="md-skip">
Skip to content
</a>

@@ -489,8 +489,8 @@



<h1 id="data-tech-compare">Data Tech Compare</h1>
<p>Compare all data related technologies with each other to find the best fit for you and your use case.</p>
<h1 id="tech-compare">Tech Compare</h1>
<p>Compare technologies with each other to find the best fit for you and your use case.</p>
<h2 id="categories">Categories</h2>
<div class="grid cards">
<ul>
@@ -499,6 +499,11 @@ <h2 id="categories">Categories</h2>
<hr />
<p>CSV, Parquet, ORC, JSON, Avro, etc.</p>
</li>
<li>
<p><span class="twemoji lg middle"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><path d="M2 2h4a1 1 0 0 1 1 1v4a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1Zm4.655 8.595a.75.75 0 0 1 0 1.06L4.03 14.28a.75.75 0 0 1-1.06 0l-1.5-1.5a.749.749 0 0 1 .326-1.275.749.749 0 0 1 .734.215l.97.97 2.095-2.095a.75.75 0 0 1 1.06 0ZM9.75 2.5h5.5a.75.75 0 0 1 0 1.5h-5.5a.75.75 0 0 1 0-1.5Zm0 5h5.5a.75.75 0 0 1 0 1.5h-5.5a.75.75 0 0 1 0-1.5Zm0 5h5.5a.75.75 0 0 1 0 1.5h-5.5a.75.75 0 0 1 0-1.5Zm-7.25-9v3h3v-3Z"/></svg></span> <strong><a href="job-orchestration/">Job orchestration</a></strong></p>
<hr />
<p>Apache Airflow, Dagster, Prefect, Mage, etc.</p>
</li>
</ul>
</div>

2 changes: 1 addition & 1 deletion site/search/search_index.json
@@ -1 +1 @@
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Data Tech Compare","text":"<p>Compare all data related technologies with each other to find the best fit for you and your use case.</p>"},{"location":"#categories","title":"Categories","text":"<ul> <li> <p> Files</p> <p>CSV, Parquet, ORC, JSON, Avro, etc.</p> </li> </ul>"},{"location":"database/","title":"Databases","text":""},{"location":"file/","title":"File","text":"<p> Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake</p> Attribute Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake Name Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake Description Apache Avro is the leading serialization format for record data, and first choice for streaming data pipelines. Apache Hudi is a transactional data lake platform that brings database and data warehouse capabilities to the data lake. Utilises data stored in either parquet or orc. Iceberg is a high-performance format for huge analytic tables. Utilises data stored in either parquet, avro, or orc. ORC is a self-describing type-aware columnar file format designed for Hadoop workloads. Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. Comma-Separated Values (CSV) is a text file format that uses commas to separate values in plain text. Delta Lake is an open-source storage framework that enables building a Lakehouse architecture. License Apache license 2.0 Apache license 2.0 Apache license 2.0 Apache license 2.0 Apache license 2.0 N/A Apache license 2.0 Source code https://github.com/apache/avro https://github.com/apache/hudi https://github.com/apache/iceberg https://github.com/apache/orc https://github.com/apache/parquet-format https://github.com/delta-io/delta Website https://avro.apache.org/ https://hudi.apache.org/ https://iceberg.apache.org/ https://orc.apache.org/ https://parquet.apache.org/ https://www.rfc-editor.org/rfc/rfc4180.html https://delta.io/ Year created 2009 2016 2017 2013 2013 0 2019 Company Apache Uber Netflix Hortonworks, Facebook Twitter, Cloudera Databricks Language support java, c++, c#, c, python, javascript, perl, ruby, php, rust java, scala, c++, python java, scala, c++, python, r, php java, scala, c++, python, r, php, go scala, java, python, rust Use cases Stream processing, Analytics, Efficient data exchange Incremental data processing, Data upserts, Change Data Capture (CDC), ACID transactions Write once read many, Analytics, Efficient storage, ACID transactions Write once read many, Analytics, Efficient storage, ACID transactions Write once read many, Analytics, Efficient storage, Column based queries Write once read many, Analytics, Efficient storage, ACID transactions Is human readable no no no no no yes no Orientation row column or row column or row row column row column Has type system yes yes yes yes yes no yes Has nested structure support yes yes yes yes yes no yes Has native compression yes yes yes yes yes no yes Has encoding support yes yes yes yes yes no yes Has constraint support no yes no no no no yes Has acid support no yes yes no no no yes Has metadata yes yes yes yes yes no yes Has encryption support no maybe maybe yes yes no maybe Data processing framework support Apache Flink, Apache Gobblin, Apache NiFi, Apache Pig, Apache Spark, Apache Spark, Apache Flink, 
Apache Drill, Apache Flink, Apache Gobblin, Apache Pig, Apache Spark, Apache Flink, Apache Gobblin, Apache Hadoop, Apache NiFi, Apache Pig, Apache Spark, Apache Beam, Apache Drill, Apache Flink, Apache Spark, Apache Beam, Apache Drill, Apache Flink, Apache Gobblin, Apache Hive, Apache NiFi, Apache Pig, Apache Spark, Apache Drill, Apache Flink, Apache Spark, Analytics query support Apache Impala, Apache Druid, Apache Hive, Apache Pinot, AWS Athena, BigQuery, Clickhouse, Firebolt, Apache Hive, Apache Impala, AWS Athena, BigQuery, Clickhouse, Presto, Trino, Apache Impala, Apache Druid, Apache Hive, AWS Athena, BigQuery, Clickhouse, Dremio, DuckDB, Presto, Trino, Apache Impala, Apache Druid, Apache Hive, Apache Pinot, AWS Athena, BigQuery, Clickhouse, Firebolt, Presto, Trino, Apache Hive, Apache Impala, Apache Druid, Apache Pinot, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, DuckDB, Firebolt, Apache Impala, Apache Druid, Apache Pinot, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, DuckDB, Firebolt, Apache Hive, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, Presto, Trino,"},{"location":"job-orchestration/","title":"Job orchestration","text":"<p> Apache Airflow</p> Attribute Apache Airflow Name Apache Airflow Description Apache Airflow is a platform to programmatically author, schedule, and monitor workflows. License Apache license 2.0 Source code https://github.com/apache/airflow Website https://airflow.apache.org/ Year created 2014 Company Airbnb, Apache Language support python Use cases Workflow scheduling Job orchestration N/A"}]}
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Tech Compare","text":"<p>Compare technologies with each other to find the best fit for you and your use case.</p>"},{"location":"#categories","title":"Categories","text":"<ul> <li> <p> Files</p> <p>CSV, Parquet, ORC, JSON, Avro, etc.</p> </li> <li> <p> Job orchestration</p> <p>Apache Airflow, Dagster, Prefect, Mage, etc.</p> </li> </ul>"},{"location":"database/","title":"Databases","text":""},{"location":"file/","title":"File","text":"<p> Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake</p> Attribute Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake Name Apache Avro Apache Hudi Apache Iceberg Apache ORC Apache Parquet CSV Delta Lake Description Apache Avro is the leading serialization format for record data, and first choice for streaming data pipelines. Apache Hudi is a transactional data lake platform that brings database and data warehouse capabilities to the data lake. Utilises data stored in either parquet or orc. Iceberg is a high-performance format for huge analytic tables. Utilises data stored in either parquet, avro, or orc. ORC is a self-describing type-aware columnar file format designed for Hadoop workloads. Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. Comma-Separated Values (CSV) is a text file format that uses commas to separate values in plain text. Delta Lake is an open-source storage framework that enables building a Lakehouse architecture. License Apache license 2.0 Apache license 2.0 Apache license 2.0 Apache license 2.0 Apache license 2.0 N/A Apache license 2.0 Source code https://github.com/apache/avro https://github.com/apache/hudi https://github.com/apache/iceberg https://github.com/apache/orc https://github.com/apache/parquet-format https://github.com/delta-io/delta Website https://avro.apache.org/ https://hudi.apache.org/ https://iceberg.apache.org/ https://orc.apache.org/ https://parquet.apache.org/ https://www.rfc-editor.org/rfc/rfc4180.html https://delta.io/ Year created 2009 2016 2017 2013 2013 0 2019 Company Apache Uber Netflix Hortonworks, Facebook Twitter, Cloudera Databricks Language support java, c++, c#, c, python, javascript, perl, ruby, php, rust java, scala, c++, python java, scala, c++, python, r, php java, scala, c++, python, r, php, go scala, java, python, rust Use cases Stream processing, Analytics, Efficient data exchange Incremental data processing, Data upserts, Change Data Capture (CDC), ACID transactions Write once read many, Analytics, Efficient storage, ACID transactions Write once read many, Analytics, Efficient storage, ACID transactions Write once read many, Analytics, Efficient storage, Column based queries Write once read many, Analytics, Efficient storage, ACID transactions Is human readable no no no no no yes no Orientation row column or row column or row row column row column Has type system yes yes yes yes yes no yes Has nested structure support yes yes yes yes yes no yes Has native compression yes yes yes yes yes no yes Has encoding support yes yes yes yes yes no yes Has constraint support no yes no no no no yes Has acid support no yes yes no no no yes Has metadata yes yes yes yes yes no yes Has encryption support no maybe maybe yes yes no maybe Data processing framework support Apache Flink, Apache Gobblin, 
Apache NiFi, Apache Pig, Apache Spark, Apache Spark, Apache Flink, Apache Drill, Apache Flink, Apache Gobblin, Apache Pig, Apache Spark, Apache Flink, Apache Gobblin, Apache Hadoop, Apache NiFi, Apache Pig, Apache Spark, Apache Beam, Apache Drill, Apache Flink, Apache Spark, Apache Beam, Apache Drill, Apache Flink, Apache Gobblin, Apache Hive, Apache NiFi, Apache Pig, Apache Spark, Apache Drill, Apache Flink, Apache Spark, Analytics query support Apache Impala, Apache Druid, Apache Hive, Apache Pinot, AWS Athena, BigQuery, Clickhouse, Firebolt, Apache Hive, Apache Impala, AWS Athena, BigQuery, Clickhouse, Presto, Trino, Apache Impala, Apache Druid, Apache Hive, AWS Athena, BigQuery, Clickhouse, Dremio, DuckDB, Presto, Trino, Apache Impala, Apache Druid, Apache Hive, Apache Pinot, AWS Athena, BigQuery, Clickhouse, Firebolt, Presto, Trino, Apache Hive, Apache Impala, Apache Druid, Apache Pinot, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, DuckDB, Firebolt, Apache Impala, Apache Druid, Apache Pinot, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, DuckDB, Firebolt, Apache Hive, AWS Athena, Azure Synapse, BigQuery, Clickhouse, Dremio, Presto, Trino,"},{"location":"job-orchestration/","title":"Job orchestration","text":"<p> Apache Airflow</p> Attribute Apache Airflow Name Apache Airflow Description Apache Airflow is a platform to programmatically author, schedule, and monitor workflows. License Apache license 2.0 Source code https://github.com/apache/airflow Website https://airflow.apache.org/ Year created 2014 Company Airbnb, Apache Language support python Use cases Workflow scheduling Job orchestration N/A"}]}
Binary file modified site/sitemap.xml.gz
Binary file not shown.
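
Everything under site/ in this commit (index.html, search/search_index.json, sitemap.xml.gz) is generated output rather than hand-edited source, which is why it changes in lockstep with docs/index.md. Assuming the default MkDocs search plugin and a standard `mkdocs build`, the pieces involved would look roughly like this (a sketch, not taken from the repository):

```yaml
# Hypothetical mkdocs.yml excerpt -- the plugins section is not shown in this diff.
plugins:
  - search        # writes site/search/search_index.json during the build

# Regenerate the static site (refreshes site/index.html and sitemap.xml.gz):
#   mkdocs build
```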
