Why use Arc?

What's so special about Code Generation?

It is all about what is being generated.

The generated code uses our Fusion Core Frameworks and APIs to tightly integrate data access code with distributed caching and search indexing. The behavior of the generated code does not change as our system evolves, because our general-purpose implementation is designed to evolve independently of the database schema.

The simplicity of the SQL used in our generated code ensures that code behavior does NOT change over time. In fact, SQL is rarely executed at all: roughly 99.99% of operations never touch the database, because almost all reads hit the cache and all search runs against the search indexes.

This means that developers no longer need to manually handle distributed caching or search indexing in their code.

What are the implications of automated caching & search indexing?

Developers do NOT need to manually handle caching. Our Data Source objects are highly scalable entities that seamlessly read from the cache and fall back to database reads when the cache is stale (which does not happen very often).
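To make that read path concrete, here is a minimal cache-aside sketch in Python. The class, method, and field names are hypothetical stand-ins rather than Arc's actual generated API, and in-memory dictionaries stand in for the distributed cache and the relational database:

```python
import time


class CustomerDataSource:
    """Hypothetical sketch of a cache-first Data Source; not the actual generated API."""

    CACHE_TTL_SECONDS = 300  # assumed freshness window for cached entries

    def __init__(self):
        # In-memory stand-ins for the distributed cache and the relational database,
        # used only to keep the sketch self-contained.
        self._cache = {}
        self._database = {"42": {"id": "42", "name": "Acme Corp"}}

    def get(self, customer_id):
        entry = self._cache.get(customer_id)
        if entry and time.time() - entry["cached_at"] < self.CACHE_TTL_SECONDS:
            return entry["row"]  # the common case: served straight from the cache

        row = self._database[customer_id]  # cache miss or stale entry: fall back to the database
        self._cache[customer_id] = {"row": row, "cached_at": time.time()}  # repopulate the cache
        return row


customers = CustomerDataSource()
print(customers.get("42"))  # first call reads the database; subsequent calls hit the cache
```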

With a small amount of code, developers can also specify which data belongs in the search index and have changes synchronize automatically between the database and the search index. It is worth noting that most companies dedicate teams of developers to handling search and fixing caching bugs; all of that goes away with Arc.
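In the same spirit, a rough Python sketch of declaring which fields are searchable and keeping the index in sync on every save might look like the following. The names and structure are illustrative only, and dictionaries again stand in for the database and the search index:

```python
SEARCH_FIELDS = ["name", "city"]  # developer-specified: which fields belong in the search index

DATABASE = {}      # stand-in for the relational database (the system of record)
SEARCH_INDEX = {}  # stand-in for the search index


def save_customer(customer):
    # 1. Write the full record to the database.
    DATABASE[customer["id"]] = customer
    # 2. Project only the declared search fields and push them to the index,
    #    so the database and the search index never drift apart.
    SEARCH_INDEX[customer["id"]] = {field: customer[field] for field in SEARCH_FIELDS}


def search(field, value):
    # Search never touches the database; it runs entirely off the index.
    return [doc_id for doc_id, doc in SEARCH_INDEX.items() if doc.get(field) == value]


save_customer({"id": "42", "name": "Acme Corp", "city": "Boston", "credit_limit": 50000})
print(search("city", "Boston"))  # -> ['42']
```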

What are the implications of this system for business logic?

Business logic can stay clean, and developers can use a grab-and-go, object-oriented approach to data access. They no longer need to worry about how and where the data comes from - it is just there. This also means no more hand-written SQL or database tuning.
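As a small, purely illustrative example of what grab-and-go business logic could look like (the names below are hypothetical and the data source is a stand-in, not Arc's API), notice that the logic contains no SQL, no caching, and no indexing code:

```python
def apply_loyalty_discount(order, customers):
    # Business logic only: the data is "just there" via the data source object.
    customer = customers.get(order["customer_id"])
    if customer["loyalty_tier"] == "gold":
        order["total"] = round(order["total"] * 0.90, 2)  # 10% discount for gold customers
    return order


class FakeCustomerDataSource:
    """Stand-in for a generated Data Source, so the example runs on its own."""

    def get(self, customer_id):
        return {"id": customer_id, "loyalty_tier": "gold"}


order = {"customer_id": "42", "total": 100.0}
print(apply_loyalty_discount(order, FakeCustomerDataSource()))  # -> {'customer_id': '42', 'total': 90.0}
```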

What is the relevance of relational databases in modern software?

It would be a mistake to ignore relational databases, which are a mature technology and remain well suited to storing important, slow-changing data. You do not want to store data related to users, security, permissions & customers (just to mention a few) in a database that does not guarantee referential integrity and consistency.

Our system improves the performance and scalability of relational databases so that, in scenarios which follow our architectural guidelines, they rival or exceed the performance of NoSQL databases.

How does this help enterprises?

We can make relational databases smaller, 70% cheaper, more scalable, and simpler to interact with. This is relevant to large enterprises which host SaaS applications storing an ever-growing amount of data because of Digital & AI Transformation.

Customers want better performance, and SaaS hosting providers need better scalability from their existing relational databases without spending millions of dollars on hardware, software, and engineering resources to make that happen. We can help because our system works, in essence, like a CDN for relational databases.

Relational databases using our system can stay small because the database is used mostly for writes. All reads are served from the cache, all search runs off the search indexes & Big Data is stored in NoSQL databases, which are significantly more cost-effective at scale than their relational counterparts.

We provide all the glue that integrates these systems into a cohesive whole for our customers.

What about reports?

NoSQL databases can be configured to run complex reports in a very scalable and cost-effective manner. Data in the incoming stream can be aggregated in real time, at scale, to generate report data. Reports can then be run off these aggregate collections.
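As a rough sketch of that pattern in Python (with an in-memory dictionary standing in for a NoSQL aggregate collection; in a real deployment each update would be an upsert into the NoSQL store), incoming events are aggregated as they arrive and reports read only the small, pre-aggregated data:

```python
from collections import defaultdict

# Stand-in for a NoSQL aggregate collection, keyed by region.
SALES_BY_REGION = defaultdict(lambda: {"orders": 0, "revenue": 0.0})


def on_order_event(event):
    # Aggregate each incoming event in real time instead of scanning raw rows later.
    bucket = SALES_BY_REGION[event["region"]]
    bucket["orders"] += 1
    bucket["revenue"] += event["amount"]


def revenue_report():
    # Reports run off the aggregate collection only, never the raw event stream.
    return sorted(SALES_BY_REGION.items(), key=lambda item: item[1]["revenue"], reverse=True)


for event in [
    {"region": "US-East", "amount": 120.0},
    {"region": "EU-West", "amount": 80.0},
    {"region": "US-East", "amount": 45.5},
]:
    on_order_event(event)

print(revenue_report())  # e.g. [('US-East', {'orders': 2, 'revenue': 165.5}), ...]
```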

For conventional use cases which demand complex SQL, customers are encouraged to use Data Warehouses or modern systems like Snowflake.

Can you be more specific about the cost savings of this system?

A typical production SQL Server costs several thousand dollars per month. Using our system, it can cost as little as $750 per month in production, even with customers actively using the system.

Azure typically recommends using VMs that cost $450 or more per month within clusters like Service Fabric. Costs increase significantly when you grow to hundreds of VMs, which can happen very fast: at 100 VMs, $450 per VM is $45,000 per month. Using Arc, you can run workloads on very small VMs with 2 or 4 cores and 8GB or 16GB RAM, even when they have higher-latency network connections. These can cost as little as $100 - $200 per VM per month ($10,000 - $20,000 per month for the same 100 VMs), resulting in significant cost savings.

Whether you are using databases, NoSQL clusters, or VMs, we recommend using the smallest instance types available and increasing only what is necessary based on actual load. As an example, you may be able to stay on an M10 cluster size with MongoDB and only increase the disk size, because disk I/O was detected as being too slow. Similarly, you may be able to continue using a 2-core VM because only your memory usage increased, so you only have to upgrade to a VM with more memory allocated to it.

This is significantly more cost-effective than the conventional recommendation from public cloud support personnel, who typically advise using very large VMs when there are issues. They do this because conventional software is not optimized for cost-effectiveness and performance, while Arc is.

 

We are a Research Company

Our mission is to advance data-driven Software Development by fundamentally reimagining it, leveraging decades of industry experience to transcend current limitations and challenges.

We will achieve this goal by offering reusable, industry-agnostic services and development tools designed with two key goals in mind: simplifying code and providing a robust foundation for all aspects of building data-driven applications. Our aim is to empower developers to focus on building the application rather than the infrastructure behind it.

Our Research focuses on continuous improvement in areas like Caching, Search, Data Access, Multi-threaded Programming, Programming Language Capabilities, Configuration Management, State Handling, and Logging.

Address


Boston, Massachusetts