Elisa network data analytics platform
Technologies used are Apache Airflow, Google Cloud Storage, BigQuery, TensorFlow and Spark.
Bring high-profile data engineers to your organization who will scale your capabilities when needed.
Trusted by governments, leading telecoms and banks
At MindTitan we are experienced in developing data platforms on-premises and in the cloud for large enterprises like Elisa and Banglalink, as well as startups like Hepta, using the right technologies for the types of data, analytical needs and business processes at hand. We understand the specific requirements of analytical workloads, how they differ from the operational workloads that most information systems are designed for, and which technologies are best suited to store and process the data.
We provide the architectural design for a data platform based on your required use cases, leaving the design flexible enough to support changing requirements and new use cases in the future.
We design the platform from both the hardware and software viewpoints, considering the performance the underlying hardware can provide and the workloads that the software has to support. The technological landscape is in constant development, as new hardware, cloud services and technologies open the door to performance increases and completely new use cases.
Our experienced data engineers are familiar with the technologies widely used in cloud and on-premises data architectures.
We can build a data architecture from scratch or add capabilities to an existing data platform, such as data storage layer developments (data lakes and warehouses), data pipelining (ETL jobs or complex batch processing of data) or analytical components (BI tools and AI model deployment).
Businesses often collect data across various distinct locations and technologies, such as on-premises relational databases, CRM tools, analytics tools, object storage and so on.
This may work well for operational tasks, but to build analytics tools and AI models that generate insight from this data, it is often necessary to integrate it into a common platform where analytical and operational workloads can be kept separate and data from various sources can be used together.
We work with our partners to understand the nature of the data, develop an integration strategy and implement it in either a new architecture or an existing one.
Different data and different use cases require different storage technologies: structured data is best kept in a data warehouse in a columnar format, while unstructured data like images, video and audio is better kept in a data lake.
We design the appropriate storage system with fast interconnects that enable the analytics tools to access the data efficiently, providing the required performance.
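To make the columnar idea concrete, here is a minimal sketch of writing structured records to a Parquet file with the pyarrow library; the column names and file path are hypothetical:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical structured data: call records with a few typed columns.
table = pa.table({
    "call_id": [1, 2, 3],
    "duration_s": [120, 45, 300],
    "cell_id": ["A17", "B03", "A17"],
})

# Parquet stores each column contiguously and compressed, which is what
# makes analytical scans that touch only a few columns fast.
pq.write_table(table, "calls.parquet", compression="snappy")
```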
Data pipelines are used for ETL jobs and for batch processing of data in analytics and machine learning workloads.
Good data pipelines are performant, robust and lend themselves well to monitoring and extending when requirements change.
The term "big data" itself is loosely defined, but the challenge it names is clear: handling big data requires different, and far more complex, tools than handling small data does.
At MindTitan we’ve had to deal with all kinds of data and have the experience to know when using the more complex toolset is merited and worth the extra cost in development and maintenance.
And if you really need it, we can help you make the right choices and build the system that helps you solve your problems.
We use Kafka, Kinesis, Pub/Sub or similar for data ingestion and initial processing.
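As a minimal sketch of the ingestion side, assuming a Kafka broker at localhost:9092 and a hypothetical "events" topic, a producer built on the confluent-kafka client could look like this:

```python
import json

from confluent_kafka import Producer

# Hypothetical broker address; in practice this comes from configuration.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Invoked once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")

event = {"user_id": 42, "action": "page_view"}  # example payload
producer.produce(
    "events",
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()  # block until outstanding messages are delivered
```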
For simpler workflows like batch processing, we use data pipelining tools such as Apache Airflow with different data source connectors, or cloud platform tools such as AWS Glue, Lambda and Data Pipeline.
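A minimal Airflow DAG for such a batch workflow might be sketched as follows; the DAG id, schedule and task bodies are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull yesterday's records from a source system.
    pass

def load():
    # Placeholder: write the transformed records to the warehouse.
    pass

with DAG(
    dag_id="daily_batch_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract before load
```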
For data lakes and unstructured data, we use object storage solutions when possible and HDFS when required by downstream processing.
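For example, landing a raw file in object storage with the google-cloud-storage client might look like the sketch below; the bucket name and paths are hypothetical:

```python
from google.cloud import storage

client = storage.Client()  # uses application default credentials
bucket = client.bucket("example-data-lake")  # hypothetical bucket name

# Store raw, unstructured data under a date-partitioned prefix so that
# downstream jobs can read a single day's data cheaply.
blob = bucket.blob("raw/audio/2024-06-01/recording_001.wav")
blob.upload_from_filename("recording_001.wav")
```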
For relational data, columnar formats like ORC or Parquet with a query engine like Presto, or a managed solution like BigQuery, work wonders, but at times PostgreSQL with columnar data storage will do just fine.
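As a sketch of the managed-warehouse route, here is a query against a hypothetical BigQuery table using the official Python client; the project, dataset and columns are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

# Hypothetical dataset and table; thanks to columnar storage, BigQuery
# scans only the columns the query actually references.
query = """
    SELECT cell_id, AVG(duration_s) AS avg_duration
    FROM `example_project.telemetry.calls`
    GROUP BY cell_id
"""
for row in client.query(query).result():
    print(row.cell_id, row.avg_duration)
```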
For orchestration, Apache Airflow on-premises and AWS Lambda with event triggers or GCP Dataflow in the cloud are our tools of choice.
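For the event-triggered route on AWS, a Lambda handler reacting to a new object in S3 could be sketched like this; the bucket notification wiring lives in infrastructure configuration, not in the code:

```python
def handler(event, context):
    # Standard shape of an S3 "object created" notification event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: kick off downstream processing for the new file.
        print(f"New object landed: s3://{bucket}/{key}")
```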
The processing itself is handled by various tools, from Spark for big data to TensorFlow for deep neural networks.
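A minimal Spark job along these lines, aggregating events from a hypothetical Parquet dataset, might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Hypothetical input path; Spark reads the columnar files in parallel.
events = spark.read.parquet("gs://example-data-lake/events/")

# Count events per calendar day.
daily_counts = (
    events
    .withColumn("day", F.to_date("event_time"))
    .groupBy("day")
    .count()
)

daily_counts.write.mode("overwrite").parquet(
    "gs://example-data-lake/aggregates/daily_event_counts/"
)
spark.stop()
```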
We work mainly in the Python and Linux ecosystems and have extensive experience with relevant tools.
We determine what new capabilities are needed, what use cases they must support, and what the performance requirements and technical limitations are. This forms the basis for the next steps.
We analyze the infrastructure you currently use and plan how to utilize it for the current goals or extend it to accommodate the new requirements.
Based on our meetings and data analysis, we’ll share with you the possible solutions. We will work hand in hand to agree on the desired outcome.
Your data is valuable, so the tools that process it must be verified to avoid data corruption or loss. Before deployment, we set everything up in a test environment and ensure it works end to end.
When the system has passed all required tests, we deploy it to the live environment. Monitoring will, of course, still be set up to notify us about any inconsistencies before they turn into problems.
All systems require maintenance from time to time, and data processing is especially sensitive in this respect, as data changes over time along with your organization and your customers. We are happy to support our customers in ensuring the continued performance of the system.
A data architect is a person responsible for the data architecture principles and the design of systems that manage and process data.
A data engineer is a specialist proficient in data storage, processing or pipelining technologies or a combination of these. These are the people who implement the components of data architecture, be it storage, processing, or data management systems.
Good decisions are based on data, and those decisions can only be as good as the quality of the data underlying them and as timely as the system's performance permits. Good data architecture ensures data integrity and monitoring. The right technological choices and architecture allow for fast queries and scalability, letting people get answers and run analyses faster.
The answer to this question is specific to the use case: sometimes using the cloud is cheaper and more efficient, while for other use cases on-premises solutions make more sense. It is also quite common to use a hybrid solution, where some services run in the cloud while others remain on premises. We can help you figure out the optimal solution for your use case, design it and build it.