Scott Gnau, Hortonworks - Hadoop Summit 2016 Dublin - #HS16Dublin - #theCUBE

Enhanced video at http://vinja.tv/j0TytKGv

01. Scott Gnau, Hortonworks, Visits #theCUBE! (00:20)
02. What Has Changed In Two Weeks (00:44)
03. What Is The Connected Data Platform (01:15)
04. How Much Time Are You Putting Into Open Source (02:37)
05. Clouds Seem To Be Cherry-Picking The Big Data Pipeline; Is The Deck Stacked For Them (04:34)
06. What Is Your Big Announcement With Pivotal (06:52)
07. How Do You See Spark Fitting Into Your Road Map (08:01)
08. Are The Conversations Different In The States Versus Europe (09:02)
09. What Do Boards Need To Know About The Change In Security (10:17)
10. Are Executives Acknowledging That Companies Will Get Hacked (12:06)
11. What Would You Tell New Prospective Customers About How You're Different (13:11)
12. What Is One Thing You'd Like To Share About Hortonworks (14:44)
13. What Is The White Space That You're Going After (16:29)
14. Was Hadoop Originally Meant To Be A Storage Engine (17:45)
15. Why Have Worldpay Up On The Stage (18:48)

Track List created with http://www.vinjavideo.com.

Hortonworks’ data platform ready for future expansions | #HS16Dublin
by Gabriel Pesek | Apr 14, 2016

Among the efforts to introduce Hadoop’s functionality and capabilities to new audiences at this year’s Hadoop Summit in Dublin, those who have already adopted it are looking for ways to expand and upgrade their existing data platform options. Scott Gnau, CTO of Hortonworks, Inc., met with John Furrier (@furrier) and Dave Vellante (@dvellante), cohosts of theCUBE, from the SiliconANGLE Media team, to talk about some of the ways Hortonworks is refining and widening its Hadoop operability, how it is bringing customers on board and how it keeps security tight.

Deploying the platform

One of the first topics Gnau approached was Hortonworks’ focus on building a stable data platform that users can build future upgrades on without having to overhaul everything.
As he laid it out, “In a rapidly changing market … where there are new tools, new requirements, new applications every day, the key thing we think that makes it sustainable is the fact that there is a central platform that can go through this evolution and agility and remain basically the platform so that applications don’t have to be rewritten and rehosted every time there’s a new shining object.”

Having established this framework, Gnau discussed what it means for Hortonworks’ end users: “Customers who are going down this path with us can futureproof the business by making sure they have a sustainable platform, into which new projects, new objects, new applications can be built, and they know because of our open-community model that we’ll support any of those things that come along.”

Connecting the pieces

As Gnau noted, for Hortonworks, “There’s a significant effort that goes into building interoperability and the platform,” but the company is also trying to establish a baseline of readiness by having each project accomplish some of that interoperability on its own. As part of that snap-together future, Gnau said that “one of the things that you’ll see us working on is ways to package that platform to make the simple on-boarding easier.” He expects this to draw customers who want to do more than just improve the efficiency of existing practices, though he sees plenty of room for improvement there as well. “Customers who are building data lakes, one of the things they want to do is not only get new and emerging data into the lake, but they want to pull in some of the legacy data from their operational systems to get better analytics.
… The DMX-h [an ETL tool for Hadoop] is an ‘easy button’ for that.”

Platform security

With big announcements regarding a partnership with Pivotal Software, Inc., along with the ability to deploy Pivotal HAWQ for customers, Gnau felt confident that people could see that Hortonworks was “not building individual pieces, but in fact, we’re building a platform.”

Beyond this, he also addressed national and continental differences in data-handling, privacy and security regulations as part of Hortonworks’ focus on security. He recognized that while security threats continue to evolve, the basic philosophy behind security practices remains the same, and though implementation of those practices is improving, companies should plan for the worst: “Plan on it happening, and build in that response.” To that end, he noted that certain ways of organizing internal data storage can do a lot to lessen the impact when security is breached and improve understanding of what was accessed in such a case.

@theCUBE #HS16Dublin

Shaun Connolly, Hortonworks - Hadoop Summit 2016 Dublin - #HS16Dublin - #theCUBE

01. Shaun Connolly, Hortonworks, Visits #theCUBE! (00:20)
02. What Is The Strategy For Hortonworks (01:00)
03. What Dynamics Are Going On In The Industry (02:42)
04. What Gives You Confidence That You Can Deliver On Your Promise (04:44)
05. Has Fraud Detection Gotten Better And How Has It Gotten Better (07:08)
06. Are All The Apps That Are Now Companies The Explosion (09:21)
07. What Does It Mean For Each App To Have Its Own Context (09:45)
08. To What Extent Does Hortonworks See Native Services From Cloud Competition (11:04)
09. How Is Hortonworks Replicating The Simplicity In The Data Pipeline (12:43)
10. Any Update On Your Microsoft Relationship (13:55)
11. What Is The Message To The New Folks (14:39)
12. What Does The Board Need To Know About Security (16:26)
13. What Is The Conversation Difference Between The States And Europe (18:60)

Track List created with http://www.vinjavideo.com.

The move to data consolidation from on-prem to the edge | #HS16Dublin
by Marlene Den Bleyker | Apr 13, 2016

The amount of data an organization accumulates has more than doubled over the past year. The enterprise is dealing with siloed data, data stored in the cloud and data at the edge; consolidating that data and connecting people with what they need is the next step for companies like Hortonworks, Inc. Shaun Connolly, VP of corporate strategy at Hortonworks, joined John Furrier (@furrier) and Dave Vellante (@dvellante), cohosts of theCUBE, from the SiliconANGLE Media team, to talk about the need for a vibrant way of consolidating data to provide customers with a 360-degree view.

Data logistics

When asked about the direction of Hortonworks’ upcoming efforts in the Internet of Things (IoT) space, Connolly explained, “Since the tail end of last year we began to make the transformation from a single-platform company to a multi-product and solutions company.
That’s where we have our emerging products team focused in the areas of data in motion, in particular in the Internet of Things area, whether it’s related to cyber security, connected car and some of the emerging use cases.”

Last August the company acquired Onyara, Inc., the creator of and key contributor to Apache NiFi, to advance its IoT efforts. “I call it a data logistics platform, a FedEx system for data delivery is kind of how I like to look at it,” Connolly said. “It is the foundation of that team — getting the data where it needs to be.”

Connecting the data platform

According to Connolly, it’s a world of connected data platforms, and those platforms need to reside where the data is. He believes the key is to actively orchestrate the data, getting it where it needs to be in a secure and transparent way that can be used to influence the edge. This helps reshape the data and develop a closed-loop system that is adaptive.

Business impact over technology

“We are in the age of data, and before we arrived here, in many cases the systems are siloed and highly structured and that is what is different … Your 360 view winds up being not just about the structured world, it winds up being about all of that contextual data that surrounds those transactions,” Connolly stated. He went on to explain that economics and the ability to capture data are driving the growth of analytics. But the architecture has to be in place: there is a need for a vibrant way of moving data to build the 360-degree view, and it is necessary to interact with the data where it resides.

Message to new onboarders

Connolly said that successful companies focus on use cases and that people who are going to be using the Hadoop platform should do the same. He recommended looking at the journey to assembling a single view before getting to predictive analytics.
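Connolly’s “FedEx system for data delivery” framing, the core idea behind Apache NiFi, can be illustrated with a toy router. This is not NiFi’s real API; all class, route and field names below are hypothetical, and the sketch shows only the routing-plus-provenance idea:

```python
from dataclasses import dataclass, field

@dataclass
class FlowFile:
    """A unit of data moving through the flow, with a provenance trail."""
    payload: dict
    provenance: list = field(default_factory=list)

class Router:
    """Toy 'data logistics' router: send each record where it needs to be,
    recording where it went so delivery is transparent and auditable."""
    def __init__(self):
        self.routes = []   # (predicate, destination) pairs, checked in order
        self.queues = {}   # destination -> delivered FlowFiles

    def add_route(self, predicate, destination):
        self.routes.append((predicate, destination))
        self.queues.setdefault(destination, [])

    def deliver(self, payload):
        ff = FlowFile(payload)
        for predicate, dest in self.routes:
            if predicate(payload):
                ff.provenance.append(dest)
                self.queues[dest].append(ff)
                return dest
        ff.provenance.append("unmatched")
        self.queues.setdefault("unmatched", []).append(ff)
        return "unmatched"

# Two of the use cases Connolly mentions: cyber security and connected car.
router = Router()
router.add_route(lambda r: r.get("type") == "security_event", "cyber_security")
router.add_route(lambda r: r.get("source") == "car", "connected_car")

print(router.deliver({"type": "security_event", "ip": "10.0.0.1"}))  # cyber_security
print(router.deliver({"source": "car", "speed_kph": 88}))            # connected_car
```

Real NiFi layers back pressure, guaranteed delivery, security and a visual flow editor on top of this basic route-and-record pattern.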
For those concerned with security, the key to corporate buy-in is to create a response plan and to accept that solving security issues will look different from how things were done in the past. He also discussed advanced machine learning and a finely tuned, compliant security data lake.

@theCUBE #HS16Dublin

Day Two Keynote - Hadoop Summit 2016 Dublin - #HS16Dublin - theCUBE

Can Hadoop get past enterprise-grade roadblocks?
by Marlene Den Bleyker | Apr 14, 2016

The keynote for day two of Hadoop Summit in Dublin, Ireland, delivered a high level of technical information, with speakers from Yahoo, BMC Software and Hewlett Packard Enterprise (HPE). Each speaker spoke about their company’s work and contributions to the Hadoop ecosystem.

The first speaker was Sumeet Singh, senior director of products for cloud and Big Data platforms at Yahoo!, Inc. He spoke about using the Hadoop platform for many years and how the company has come to rely on it to push the boundaries of its capabilities. Singh ran through a number of open-source projects, including batch compute, queries, Apache HBase and other initiatives in which Yahoo! participates, and the advances the company has made with them.

“There is a framework we call CaffeOnSpark, a phenomenal framework to advance deep learning on existing Hadoop or Spark clusters,” he said. He explained that CaffeOnSpark turns existing Hadoop and Spark clusters into a powerful platform for deep learning without the need to set up a separate cluster or move data back and forth between clusters. The platform provides server-to-server direct communication that speeds up learning, and it can fully distribute the learning without scalability issues. CaffeOnSpark also supports incremental learning on top of saved models. The open-source project received an Apache license last month.

Moving forward, Singh sees four areas of opportunity in the Hadoop ecosystem: large-scale machine learning, deep learning, a quest for speed to fight latency, and more efficient cluster operations.
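The incremental-learning idea Singh describes, resuming training from a saved model rather than starting over, is generic. Here is a minimal sketch with a toy linear model; it is unrelated to CaffeOnSpark’s actual API, and all names and numbers are invented:

```python
# Illustration of "incremental learning on top of saved models": train a
# tiny linear model, save its weights, then resume training on new data
# instead of starting from scratch.
import json

def sgd_step(w, b, x, y, lr=0.1):
    """One stochastic-gradient update for the model y = w*x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

def train(w, b, data, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            w, b = sgd_step(w, b, x, y)
    return w, b

# Initial training on one batch (underlying target: y = 2x + 1).
w, b = train(0.0, 0.0, [(0, 1), (1, 3), (2, 5)])
saved = json.dumps({"w": w, "b": b})        # "save the model"

# Later: incremental training resumes from the saved weights.
state = json.loads(saved)
w2, b2 = train(state["w"], state["b"], [(3, 7), (4, 9)], epochs=50)
print(round(w2 * 5 + b2, 2))                # close to 11, i.e. 2*5 + 1
```

The saved weights carry everything the second training run needs, which is why no separate cluster or data shuffle is required in the pattern Singh describes.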
Herb Cunitz, president at Hortonworks, Inc., next conducted a use-case panel that shared the experiences of enterprise users, including how each customer is gaining value from the Hadoop platform. The projects ranged from connected data platforms collecting data from smart meters to machine learning and sophisticated analytics.

Helping the enterprise achieve the highest level of automation

Joe Goldberg, solutions marketing consultant at BMC Software, Inc., was on hand to talk about what he called the “backroom stuff,” the behind-the-scenes activities that enable customers to get more from their data. He noted that some of the more popular use cases for Hadoop and Big Data are things like Extract, Transform and Load (ETL) and enterprise data warehouse modernization.

Goldberg talked about a platform approach to managing batch. “One of the most important characteristics — as Big Data and Hadoop applications are moving toward the enterprise — is that you have the ability to manage all of your batch processing in a consistent way,” he said. “A single way to visualize and manage across that diversity.” What he hears from customers moving Hadoop and Big Data applications into an enterprise context is that a lot of complex traditional technology already exists, and they want to manage it and see the relationships in how all of that processing comes together. Goldberg said it is necessary to abstract and elevate how batch processing is managed, so that you look at it from a business perspective rather than at the individual technologies. “You still need that deep technical detail and you need to be able to drill down and see all that information, but you want to stay at a high level from a management perspective,” he said. He discussed the need for a platform to be adaptable and extendable.
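Goldberg’s point about managing heterogeneous batch technologies through one abstraction can be sketched in a few lines. This is a hedged toy, not BMC’s product; the job and function names are invented:

```python
# Toy batch-workflow manager: each job wraps whatever technology runs it
# (Hadoop, ETL, legacy) behind one interface, and the workflow runs jobs
# in dependency order so operators see the business flow, not the tools.
from collections import deque

class BatchJob:
    def __init__(self, name, tech, run, depends_on=()):
        self.name, self.tech, self.run = name, tech, run
        self.depends_on = list(depends_on)

def run_workflow(jobs):
    """Topological execution: a job runs only after its dependencies."""
    by_name = {j.name: j for j in jobs}
    indegree = {j.name: len(j.depends_on) for j in jobs}
    dependents = {j.name: [] for j in jobs}
    for j in jobs:
        for dep in j.depends_on:
            dependents[dep].append(j.name)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        name = ready.popleft()
        by_name[name].run()       # technology-specific work hides here
        order.append(name)
        for m in dependents[name]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order

log = []
jobs = [
    BatchJob("load_hdfs", "hadoop", lambda: log.append("load")),
    BatchJob("transform", "etl", lambda: log.append("transform"),
             depends_on=["load_hdfs"]),
    BatchJob("report", "legacy", lambda: log.append("report"),
             depends_on=["transform"]),
]
print(run_workflow(jobs))  # ['load_hdfs', 'transform', 'report']
```

The `tech` tag is where a real tool would attach drill-down detail, matching Goldberg’s “high level for management, deep detail on demand.”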
Steve Sarsfield, product marketing manager at HPE, took the stage and said, “Enterprise-grade Hadoop has enterprise-grade problems.” He went on to discuss the three types of data Hewlett Packard looks at: business data, machine data (IoT) and human data (such as facial-recognition data), noting that the vision of Hadoop was to put all this data into the data lake. The problem, according to Sarsfield, is that the data remains in silos, and there is a need to break away from doing things in different clusters.

Sarsfield laid out four issues for enterprise-grade Hadoop. First, it is hard to get mature analytics capabilities. Second, specialized skills are required, and software needs to be easy for end users. Third, there are architectural limitations when running complex workloads. And finally, there are security challenges. Technology from Hewlett Packard’s acquisition of Voltage Security provides security for data on Hadoop servers, data moving through a company’s Hadoop network, and data in use.

@theCUBE #HS16Dublin
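Sarsfield’s data-protection point can be made concrete with a small sketch. This illustrates the general idea of protecting sensitive fields before they land in the lake; it is not Voltage’s format-preserving encryption, and the key handling here is deliberately naive:

```python
# Illustrative only: replace sensitive values with deterministic keyed
# tokens before they enter the data lake. Equality joins and group-bys
# still work on tokens, and a breach exposes tokens rather than values.
import hmac, hashlib

SECRET = b"demo-key"  # hypothetical; real keys belong in a key manager

def tokenize(value: str) -> str:
    """Deterministic token: the same input always maps to the same token,
    so analytics over protected data remains possible."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

records = [{"card": "4111111111111111", "amount": 42.0}]
protected = [{**r, "card": tokenize(r["card"])} for r in records]
```

Format-preserving encryption goes further than this sketch by keeping the token in the original format (e.g., a 16-digit number), so legacy schemas need no changes.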

Teradata Partner Interview at Hadoop Summit San Jose 2016

David Hill, Open Energi - Hadoop Summit 2016 Dublin - #HS16Dublin - #theCUBE

01. David Hill, Open Energi, visits #theCUBE! (00:14)
02. Open Energi: Creating a Distributed Energy Storage System (00:50)
03. Day to Day at Open Energi (01:26)
04. Introducing Software that Aggregates Small Amounts of Stored Energy (02:28)
05. UK Regulation and Availability for Energy Consumption (04:17)
06. The Big Data Angle for Open Energi (06:01)
07. The History of Open Energi and Their Capital Efficiency (06:40)
08. IoT Fever and the Reality Behind Connectivity (08:31)
09. Securing the Grid: Security is at the Heart of Open Energi (10:44)
10. The Customer Experience with Hortonworks (11:50)
11. Thoughts on the Hadoop Europe Show and the Industry in General (12:59)

Track List created with http://www.vinjavideo.com.

Harnessing the power of IoT for cleaner, more efficient and affordable energy | #HS16Dublin
by Betsy Amy-Vogt | Apr 13, 2016

The IoT has been heralded as the bringer of many wild and wonderful benefits, from cars that drive themselves in for repairs to fridges that order more ice cream when your supply is getting dangerously low. But British company Open Energi Ltd. is harnessing the benefits of connectivity to bring customers more efficient, more affordable and, ultimately, cleaner energy.

Joining John Furrier (@furrier) and Dave Vellante (@dvellante), cohosts of theCUBE, from the SiliconANGLE Media team, during Hadoop Summit in Dublin, Ireland, David Hill, business development director at Open Energi, talked about what Open Energi does, how its software works and the benefits of being a Hortonworks customer.

Transferring the value back to the consumer

Millions of devices hold small amounts of stored energy that currently go unused. Open Energi’s software accesses this energy, aggregates it and provisions it where and when it is needed.
This has the dual benefit of avoiding the expense of peaking demand and providing a revenue stream for customers, as well as smoothing the overall energy-use pattern for efficiency. The UK’s more liberalized energy system, where customers can choose their energy provider, has been instrumental in allowing energy-use experiments. However, “similar things are happening in the USA,” said Hill.

Connectivity is the key first step

The biggest energy-consuming machines are old and not smart, Hill explained. He gave the example of a large water pump that controlled the water levels in a dock to keep boats afloat. The control system consisted of a guy who would watch the levels and crank the pump up when they needed to be raised! A large amount of energy was latent in the pump but could not be accessed. Although connecting the pump to the Internet is costly, the savings make it worthwhile. As smaller items, like fridges or electric kettles, become connected, the available pool of devices will grow, and so will the opportunities for accessing and efficiently redistributing that energy.

“We were an IoT company before we even knew what IoT was,” said Hill, discussing how Open Energi was founded pre-Hadoop. Becoming a Hadoop customer was a “huge leap,” and Hortonworks DataFlow services are enabling much more cost-effective integration that has Hill extremely excited about the future.

@theCUBE #HS16Dublin
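The aggregation idea Hill describes, pooling many small slices of flexible demand into one grid-scale resource, can be sketched as follows. All device names and numbers are made up for illustration and do not reflect Open Energi’s actual system:

```python
# Toy demand-response aggregator: each device can shed a small amount of
# load without disruption; pooled together, the fleet acts like one large
# resource the grid can call on at peak times.
class Device:
    def __init__(self, name, flexible_kw):
        self.name = name
        self.flexible_kw = flexible_kw  # load it can shed without impact

def dispatch(devices, needed_kw):
    """Greedily shed flexible load, largest first, until demand is met."""
    shed, plan = 0.0, []
    for d in sorted(devices, key=lambda d: d.flexible_kw, reverse=True):
        if shed >= needed_kw:
            break
        plan.append(d.name)
        shed += d.flexible_kw
    return plan, shed

fleet = [Device("water_pump", 40.0), Device("fridge_bank", 5.0),
         Device("hvac_unit", 15.0)]
plan, shed = dispatch(fleet, needed_kw=50.0)
print(plan, shed)  # ['water_pump', 'hvac_unit'] 55.0
```

As smaller connected devices join the pool, the same dispatch logic scales: more devices simply mean finer-grained and more reliable aggregate response.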
