Hardware Coverage From Around the Web
Five years ago, an IBM-built supercomputer designed to model the decay of the US nuclear weapons arsenal was clocked at speeds no computer in the history of Earth had ever reached. At more than one quadrillion floating point operations per second (that's a million billion, or a "petaflop"), the aptly named Roadrunner was so far ahead of the competition that it earned the #1 slot on the Top 500 supercomputer list in June 2008, November 2008, and one last time in June 2009. Today, that computer has been declared obsolete and is being taken offline. Based at the US Department of Energy's Los Alamos National Laboratory in New Mexico, Roadrunner will be studied for a while and then ultimately dismantled. While the computer is still one of the 22 fastest in the world, it isn't energy-efficient enough to make the power bill worth it. "During its five operational years, Roadrunner, part of the National Nuclear Security Administration's Advanced Simulation and Computing (ASC) program to provide key computer simulations for the Stockpile Stewardship Program, was a workhorse system providing computing power for stewardship of the US nuclear deterrent, and in its early shakedown phase, a wide variety of unclassified science," the Los Alamos lab said in an announcement Friday.
To those in the storage world who rejoice in being in-the-know about the ever-shifting technology and vendor landscape in front of them, Gartner Magic Quadrants are seen as major events in the "Vendor Olympics" that our industry can often devolve into. Now, by combining multiple disparate storage-related Magic Quadrants into one review of General-Purpose Disk Arrays (made publicly available by HDS for you, here), it seems Gartner has created the decathlon of the storage Vendor Olympics. And while you might be a fan or a detractor of Gartner's methodologies, it does measure two vectors of actual importance to storage customers: a vendor's ability to execute and its completeness of vision. (Or, in my own plain English – "Can they do what they say?" and "Can they correctly anticipate customer and market needs?") While my perspective comes squarely from the vendor side, those do seem like pretty appropriate areas to focus on. Given the new, broader focus of this Magic Quadrant and the significant industry chatter that can follow any new Gartner commentary, it seemed relevant to add some thoughts and perspective about it. First and foremost, the overall positioning of the vendors "feels" about right. There are the expected "Leaders" (with Hitachi/HDS among them) that have built, bought or partnered their way to the top of the pack. While individual positions could be argued, I doubt there were that many surprises in the "Leaders" quadrant.
The amount of buzz generated by the supposed PayPal exodus from VMware to OpenStack via Business Insider has been interesting to say the least. It sort of reads like one of those Weekly World News parody articles that I used to pick up as a kid for a few laughs. For those not following the adventure, BI basically ran a quote from Boris Renski, EVP at Mirantis, claiming that PayPal is doing a rip-and-replace of 80,000 servers from "VMware" to OpenStack via FUEL, beginning with 10,000 servers this summer. I use my magic quote fingers around VMware because this is almost impossible to decipher – you don't install "VMware" on something. For example, does the article specifically mean the ESXi hypervisor is being replaced with KVM, or does it mean that only vCloud is being removed in favor of OpenStack's Nova on top of ESXi? The two examples are entirely different, and I'm surprised that no clarification was offered in the article by Business Insider. The title of the article could have easily just been "EVP Exaggerates; Film at 11" without being too far off the mark. It was later revealed by Adrian Ionel, the CEO of Mirantis, that his EVP Renski was in fact "exaggerating the use case" with only second-hand knowledge of the project. BI didn't even bother adding "updated" to their title page, and just snuck it down at the bottom where you may or may not stumble upon it. Fortunately, Forbes is a little better at staying on the ball and understands how to inform its readers that an article has changed after posting it.
Rackspace shelled out an undisclosed sum for Exceptional Cloud Services, a San Francisco-based startup that develops solutions for Redis. Redis is a popular open source key-value store designed to support large databases that process transactions at very rapid speeds – in other words, Big Data. As part of the deal, Rackspace is acquiring the rights to Exceptional Cloud Services' three software products: the Exceptional.io error-tracking engine, a supplementary visualization platform called Airbrake.io, and the Redis to Go management automation framework. The company says that these solutions are used by over 50,000 developers worldwide. Besides acquiring the IP, Rackspace is absorbing Exceptional Cloud Services' talent. The startup's R&D muscle will be moved to the cloud host's San Francisco branch, and its sales and marketing department will be integrated into Rackspace's San Antonio office. "Our focus on the developer community is deeper than ever before and we are carefully adding strategic technologies to our open hybrid cloud portfolio, while keeping the needs of application developers in mind," said Pat Matthews, the senior vice president of corporate development at Rackspace. "Through the acquisition of Exceptional Cloud Services, we're gaining technology and expertise that will provide startups and cloud developers with the tools that help them deliver more reliable customer experiences and bring the next generation of cloud-based apps to market faster."
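Since the story hinges on Redis being a fast key-value store, a minimal sketch of what that looks like from an application may help. This assumes the widely used redis-py client and a Redis server already running locally; it is not specific to Redis to Go or anything Rackspace announced.

```python
import redis

# Assumes a Redis server is reachable on localhost:6379 (an assumption, not from the article).
r = redis.Redis(host="localhost", port=6379, db=0)

# Basic key-value operations: set a value, read it back, and give it a TTL.
r.set("session:42", "user-anne")
print(r.get("session:42"))          # b'user-anne'
r.expire("session:42", 3600)        # keep the key for one hour

# Counters are a common Redis pattern for high-rate workloads.
r.incr("page:home:hits")
print(r.get("page:home:hits"))
```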
Seagate is developing an internal desktop hybrid hard drive with a flash cache to speed PC boot and application load times to near-SSD levels at little more than plain ordinary disk prices. It's also rebranding its Momentus XT notebook drives as Laptop SSHDs and introducing a single-platter laptop Thin SSHD. This third-generation Seagate hybrid drive development comes shortly after Seagate decided to stop producing disk-only Momentus drives. The Desktop SSHD (Solid State Hybrid Drive), which we think is an evolution of the existing Desktop HDD, will store up to 2TB of data on its platters and have an 8GB cache. Seagate says it picks out which data to stick in cache using Adaptive Memory Technology, as used on its Momentus XT hybrid hard drive, now called the Laptop SSHD. That means high-access-rate data goes into the flash cache and is served up at NAND speed, with no delay caused by seek time as a disk read head locates a data track and waits for the desired blocks to spin round to it. Seagate says Windows 8 can boot in less than 10 seconds from the Desktop SSHD, while Windows 7 makes a bit of a muck of it, only getting down to 12 seconds. Seagate says its own fast-boot technology is integrated or combined with Windows 8's fast-boot technology, hence the two-second saving.
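Seagate doesn't publish how Adaptive Memory Technology decides what to promote, but the general idea of a frequency-driven flash cache can be sketched. The class, thresholds and sizes below are purely illustrative assumptions, not Seagate's algorithm.

```python
from collections import Counter

class HybridDriveCacheSketch:
    """Toy model of a hybrid drive promoting hot blocks to a small flash cache.
    Purely illustrative; real SSHD firmware is far more sophisticated."""

    def __init__(self, flash_blocks=4, promote_after=3):
        self.flash = set()              # block numbers currently held in flash
        self.hits = Counter()           # access counts per block
        self.flash_blocks = flash_blocks
        self.promote_after = promote_after

    def read(self, block):
        self.hits[block] += 1
        if block in self.flash:
            return "flash"              # served at NAND speed, no seek latency
        if self.hits[block] >= self.promote_after:
            self._promote(block)
        return "disk"                   # served from the platters this time

    def _promote(self, block):
        if len(self.flash) >= self.flash_blocks:
            # Evict the coldest cached block to make room.
            coldest = min(self.flash, key=lambda b: self.hits[b])
            self.flash.discard(coldest)
        self.flash.add(block)

drive = HybridDriveCacheSketch()
for block in [7, 7, 7, 7, 9, 7]:
    print(block, drive.read(block))     # block 7 ends up served from "flash" after repeated reads
```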
IBM has just announced its plans to embrace OpenStack in an effort to help enterprises better balance private and public cloud resources. The new initiative will see Big Blue switching all of its cloud services and software to an open-cloud architecture, allowing customers to easily alternate between equipment and service vendors and do away with any worries of vendor lock-in. To get the ball rolling in this newest venture, IBM plans to set up a new private cloud service based on OpenStack, which will be introduced alongside new software called SmartCloud Monitoring Application Insight that's able to monitor cloud application availability and progress in real time. In addition, it also plans to release two more applications, although these remain in beta. We've known for some time that IBM has had big designs on the cloud services arena, what with projects like MobileFirst and its various "Smart" initiatives, and with good reason – this year the market for cloud services will top $130 billion, according to the latest forecast from Gartner. Moreover, IBM already has a big presence in the cloud of sorts, with around 5,000 of its customers currently running private or mixed public-private clouds. IBM's move is in sharp contrast to its origins, when it was one of the chief practitioners of the so-called lock-in strategy. But IBM is a company that likes to move with the times, and it understands that the cloud computing industry will soon have to embrace an accepted standard if it's ever going to move forward.
Numerous sources have confirmed that Evan Powell, CEO of Santa Clara-based Nexenta, will step down as CEO today for undisclosed reasons. He assumes the position of CSO at the company, having served as its CEO since 2007. Privately held and founded in 2005, the company brings an open philosophy to software-defined storage and uses its community model to become a storage force. Network storage veteran Bridget Warwick also joins the company as its chief marketing officer. Nexenta has been making waves with major partnerships and touts over 5,000 customers, with an emphasis on the finance sector. Total sales are approaching half a billion dollars, projected to be over a billion dollars this year and to reach two billion dollars in 2014. The explosive growth echoes the rise of software-defined datacenters, delivering major savings and capabilities at the same time. As a software-only platform, its heterogeneous capabilities result in not only high scalability but also high performance, which is critical in high-demand scenarios such as virtualization and VDI. At a time when it is commonly estimated that more than half of IT budgets are spent on big data, cloud and virtualization, this is where Nexenta shines with its ZFS-based storage solutions that break the boundaries of proprietary and slow storage systems. The company competes with NetApp and its Data ONTAP products.
Tape library vendor SpectraLogic says it installed 550PB of tape library capacity in the second half of 2012 and reports that its revenues for those six months, led by rising T-Finity library sales, were up 9 per cent compared to a year ago. Half an exabyte of tape equates to the installation of roughly a dozen of the vendor's high-end T-Finity libraries in the period - each library has a 45PB raw capacity maximum with LTO-5 tapes. A base unit cost $218,000 in 2011. Let's do some rough-and-ready math and up that to $225,000 for inflation, then triple it for a high-capacity box - which would give us $675,000 apiece. A dozen of these would give us $8.1m, which seems low for a six-month revenue period for SpectraLogic. It is very rough and ready, and my gut says the number should be $10m or more. As a private company, it doesn't have to publish its financial statements, so we can't tell for sure. During the year the NCSA's Blue Waters supercomputing system had at least four Spectra T-Finity libraries installed, and it's understood though not publicly stated that Amazon is using Spectra T-Finity tape libraries to provide the storage in its Glacier cloud archive storage facility.
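The back-of-envelope estimate above is easy to reproduce. All the inputs below are the article's own rough assumptions, not Spectra Logic figures.

```python
# Reproducing the article's rough-and-ready estimate.
installed_capacity_pb = 550          # tape capacity installed in H2 2012
tfinity_max_pb = 45                  # raw maximum per T-Finity library with LTO-5

libraries = installed_capacity_pb / tfinity_max_pb
print(f"Roughly {libraries:.1f} high-end libraries")          # ~12.2

base_unit_2011 = 218_000
base_unit_2012 = 225_000             # nudged up for inflation
high_capacity_unit = base_unit_2012 * 3                       # $675,000 apiece
revenue_estimate = 12 * high_capacity_unit
print(f"Implied revenue: ${revenue_estimate:,}")              # $8,100,000
```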
What with everyone else jumping on the Hadoop bandwagon at Strata this week, Intel clearly doesn't want to be left out. But instead of simply joining the club, the chip maker seems to have stolen the show altogether, announcing its own Apache Hadoop distribution this morning at an invitation-only event. Intel said that it plans to offer its very own flavor of Hadoop, the hugely popular open-source Big Data software that allows applications to be run on large clusters of servers. "We're in an era of generating huge amounts of data, but the key point is not what we get out of it," explained Boyd Davis, VP of Intel's Architecture Group. Rather, Davis stressed that what really matters is big answers, adding that through them Big Data has almost limitless potential to transform our societies, from managing energy supplies to personalized healthcare services. For Intel, then, its Hadoop distribution has the same purpose as those offered by the likes of HP, Cloudera and Greenplum – to enable businesses to make better decisions and identify possible security threats quickly. But Big Data is rapidly becoming a very competitive space, and questions will be asked about whether we really need another open-source Hadoop distribution. With similar announcements this week from Cloudera, Hortonworks, which has just put out a Windows version of HDP, and EMC's Greenplum, the Hadoop space is quickly becoming a very crowded one, with a dozen or so top companies offering what are pretty similar platforms.
Few open-source tools have enjoyed the meteoric popularity of Hadoop in building these next-generation big data analytics platforms. Even in its rawest distro form, it's eminently flexible, scalable and very cost-effective. As a result, Hadoop has quickly become the new de facto standard for anyone doing anything in big data analytics computing. We believe that HDFS (the underlying data abstraction beneath Hadoop) will play a key role as the future "data substrate" for next-generation data infrastructure. The familiar relational database that's powered data-based processing for the last few decades will likely be subsumed by newer capabilities built on top of HDFS. To really appreciate why one well-understood approach to managing data might be subsumed by a newer approach, it's important to understand the context. At the highest level, there's a pronounced shift to digital business models, ones where the entire value proposition centers around gathering, storing, analyzing and leveraging massive amounts of information. The words may change, but the underlying concepts don't. Study the people who are now doing things the new way, and their information management patterns are markedly different than before. For one thing, they're strongly incented to keep as much data around as humanly possible, preferably in its native and raw form with as much fidelity as possible.
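To make the HDFS-plus-Hadoop processing model concrete, here is a word-count sketch in the Hadoop Streaming style, where the mapper and reducer are plain functions over key-value pairs; on a real cluster the input would be file splits read from HDFS and the shuffle would be done by the framework, which sorted() stands in for here. The sample data and function names are illustrative only.

```python
from itertools import groupby

# Mapper: emit (word, 1) for every word in the input lines.
def map_phase(lines):
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

# Reducer: sum the counts for each word. Hadoop delivers pairs grouped
# and sorted by key; locally we sort them ourselves before grouping.
def reduce_phase(pairs):
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["big data needs big answers", "hadoop stores big data"]
    for word, total in reduce_phase(map_phase(sample)):
        print(word, total)
```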
Flash storage is a huge game-changer for your business. Not just for desktop and portable computing, but also in data centers and the cloud. Flash is incredibly fast, and it’s become much less expensive. It’s offering amazing new opportunities for businesses—whether they run their IT in traditional data centers, or on cloud-computing services. Recently, a friend was at a meeting of executives, where “The Cloud” was the big topic of excited conversation. What would happen to data storage? Would companies still have data centers? And so on. My answer: As long as there’s data, we’ll store and protect it. And if we save and store data, we’ll always want to get at it. Moreover, we all amass more and more data—all the time. So, fast access to this growing data store is critical for tomorrow’s business. The important thing to know is that your data may be in a data center, but not your data center. (Hopefully it’s in more than one data center, to prevent downtime disasters.) And it’s probably a shared data center. It’s not owned by your company, but by Google, Amazon, Logicalis, or any of a dozen other companies. This is what people call a public cloud, or often simply the cloud.
The Apache Hadoop platform has been ascendant in recent years, thanks to its flexibility, rich developer ecosystem, and ability to suit the analytics needs of developers, web startups and enterprises. Hadoop is on track to become prevalent in 2013, with Gartner predicting it will be embedded in two thirds of Big Data analytics products by 2015. A new eWeek slideshow details the reasons why. eWeek cites Hadoop’s ability to analyze data in real-time, its cost-effectiveness in relation to the value of predictive analytics, Hadoop-optimized hardware, and the increasing number of practitioners with Hadoop expertise. This will drive the demand for data scientists, as suggested by a PC World article that states, "A keyword search for Hadoop on the IT careers site Dice.com turned up nearly 1,000 jobs posted within the past month, many from software vendors."

Paul M. Davis tells stories online and off, exploring the spaces where data, art, and civics interconnect. He currently works with organizations including Greenplum, Pivotal Labs, and the Center for Theater Commons. Previously he edited Shareable Magazine, and served as Content Manager and Strategist at the School of the Art Institute of Chicago.
Today Cisco elaborated on the Cisco ONE controller, currently in beta. The announcement brings new SDN applications and hardware support for existing products to market. Also announced is the new Nexus 6000 switching product, along with a NAM module for the Nexus 7000. The Nexus 6000 is a 96-port, 40Gb low-latency data center switch. Lastly, the Nexus 1000v gains Hyper-V support with integration into Microsoft System Center VM Manager, plus VXLAN gateway support, both calculated moves in high-stakes data center virtualization chess. The big announcement is the software, and will be for some time to come. Cisco has publicly committed to supporting OpenFlow since its announcement of onePK last year. In order to do flow-based forwarding, it only makes sense for consumers to expect a common mechanism to interact with hardware. The value should be well up the stack, but to get a proper stack, software developers need a common set of primitives. There are some glimpses of deployment strategies from Cisco. Anything other than niche cases appears to be leveraging proactive flow instantiation; reactive models in Cisco's diagram allude to embedded systems. "We want to meet developers where they are—this is not a controller tied to any particular technology," said Shashi Kiran, senior director of data center and cloud networking at Cisco. "OnePK can take advantage of our hardware features [but] OpenFlow needs to work against a more abstract switch model." – EETimes.com http://eetimes.com/electronics-news/4406200/Cisco-packs-ASICs-at-40G–phbraces-OpenFlow
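The proactive-versus-reactive distinction mentioned above is easy to illustrate: a controller can push flow rules ahead of time, or install them only when a switch reports a table miss. The sketch below is a toy model in plain Python; the class names and the port-picking logic are made up for illustration and do not correspond to onePK, OpenFlow messages, or any real controller API.

```python
class ToyFlowTable:
    """Minimal model of a switch flow table keyed on (src, dst)."""
    def __init__(self):
        self.rules = {}                       # match -> action

    def lookup(self, match):
        return self.rules.get(match)          # None means a table miss


class ToyController:
    def proactive_install(self, table, known_paths):
        # Proactive: rules are pushed before any traffic arrives.
        for match, action in known_paths.items():
            table.rules[match] = action

    def handle_miss(self, table, match):
        # Reactive: compute a rule only when the switch punts a miss to us.
        action = f"forward-to-port-{hash(match) % 48}"   # placeholder decision
        table.rules[match] = action
        return action


table, ctrl = ToyFlowTable(), ToyController()
ctrl.proactive_install(table, {("10.0.0.1", "10.0.0.2"): "forward-to-port-7"})
print(table.lookup(("10.0.0.1", "10.0.0.2")))             # hit, no controller round trip
print(table.lookup(("10.0.0.3", "10.0.0.4")))             # miss
print(ctrl.handle_miss(table, ("10.0.0.3", "10.0.0.4")))  # reactive install
```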
In order to better gauge current usage of, interest in, and familiarity with desktop virtualization technology, ESG surveyed North American IT decision-makers responsible for overseeing their organizations’ desktop and mobile computing strategies.
We are pleased that as of January 30, 2013, NetApp has achieved quality assurance and certification for Rackspace Private Cloud. This designation indicates that after a detailed, automated testing and validation process, NetApp storage and data management solutions have achieved readiness for building a private cloud based on Rackspace Private Cloud Software. As interest in deploying private clouds continues to ramp up, enterprise customers must navigate an ever-rising sea of choices among hardware, software, and cloud management solutions. With a flood of start-ups, open-source initiatives, and established players vying for attention, this can be an overwhelming task indeed! Within the cloud management space, there has been growing interest in open source management software, leveraging the low- (or no-) cost ability to develop a private cloud. In particular, over the past year NetApp has seen strong interest in deploying OpenStack, a leading open source cloud system software offering. There is no doubt that OpenStack represents one of the most significant opportunities in open source cloud management. However, deploying open source in the enterprise can often entail tradeoffs – the need to test new architecture despite trying to minimize deployment time, or the need to balance burgeoning capabilities with battle-proven results. Much like the rise of Linux within the enterprise over the past decade, the ongoing and inevitable enterprise-hardening of OpenStack is causing a growing number of organizations to consider it as a viable cloud option.
Cloud computing, including cloud storage, spans products, solutions and services that offer different functionality and enable benefits for various types of organizations, entities or individuals. Public clouds, private clouds and hybrids leveraging public and private continue to evolve in technology, reliability, security and functionality, along with the awareness around them. Cloud concerns range from security, compliance, industry or government regulations and privacy to budgets, whether with private, public or hybrid clouds. Peer, cooperative (co-op), consortium or community clouds can be a solution for those whose needs traditional public, private, hybrid, AaaS, SaaS, PaaS or IaaS offerings do not meet. From a technology standpoint, there should not have to be much, if any, difference between a community cloud and a public, private or hybrid one. Instead, community clouds are more about thinking outside of the box, or outside of common cloud thinking per se. This means thinking beyond what others are talking about or doing and looking at how cloud products, services and practices can be used in different ways to meet your concerns or requirements. Greg Schulz – author of Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
This post attempts to provide a framework for that discussion, in which I'll argue that the platform for the next-generation data center network has already taken shape. It's called Network Virtualization, and it looks a lot like the networking platforms we're already familiar with. Over the last 15 years the networking industry has been challenged with tremendous growth in all areas of the infrastructure, but none more challenging than the data center. As networking professionals we have built data centers to grow from tens, to hundreds, to thousands of servers – all while undergoing a transition to server virtualization and cloud computing. How on earth did we manage to make it this far? Platforms. As networking professionals we have relied on modular switching platforms as a foundation to build any network, connect to any network and any host, meet any requirement, and build an architecture that is both scalable and easy to manage with L2-L4 services. This is evident from the phenomenal success of the modular chassis switch, a marvel in network engineering for architecting the physical data center network. There have been many different modular switching platforms over the years, each with its own differentiating features – but the baseline fundamental architecture is always the same. There is a supervisor engine which provides a single point of configuration and forwarding policy control (the management and control plane). There are linecards (the data plane) with access ports that enact a forwarding policy prescribed by the supervisor engine. And finally there are fabric modules that provide the forwarding bandwidth from one linecard to the next.
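The chassis architecture described there maps neatly onto a control-plane/data-plane split, sketched below in plain Python; the class and method names are illustrative only and don't correspond to any vendor's software. Network virtualization keeps the same shape: the supervisor becomes a software controller and the linecards become virtual switches on hypervisor hosts.

```python
class Supervisor:
    """Control plane: single point of configuration and forwarding policy."""
    def __init__(self):
        self.linecards = []
        self.policy = {}                        # destination -> egress port

    def add_linecard(self, card):
        self.linecards.append(card)

    def program_route(self, destination, egress_port):
        self.policy[destination] = egress_port
        for card in self.linecards:             # push policy to every data-plane element
            card.install(destination, egress_port)


class Linecard:
    """Data plane: forwards according to policy pushed by the supervisor."""
    def __init__(self, slot):
        self.slot = slot
        self.forwarding_table = {}

    def install(self, destination, egress_port):
        self.forwarding_table[destination] = egress_port

    def forward(self, destination):
        return self.forwarding_table.get(destination, "drop")


sup = Supervisor()
cards = [Linecard(slot) for slot in range(2)]
for card in cards:
    sup.add_linecard(card)
sup.program_route("10.1.1.0/24", egress_port=12)
print(cards[1].forward("10.1.1.0/24"))          # every linecard enacts the same policy
```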
The neutral communications medium is essential to our society. It is the basis of a fair competitive market economy. It is the basis of democracy, by which a community should decide what to do. It is the basis of science, by which humankind should decide what is true. Let us protect the neutrality of the net. – Tim Berners-Lee, inventor of the World Wide Web. Over the next 10 years the networking industry is going to transform much more than it has in the previous ten. The catalyst as we know it today is called SDN, but what that really means is that it's time to bring networking at least into the same decade, conceptually, as the rest of the computing world. Networking will move away from being a collection of loosely coupled monolithic devices into a more layered and programmatic construct that will resemble the virtualization of the x86 market. This newfound flexibility will bring new ways of controlling traffic that today essentially drops in the mailbox and is never seen again. I propose that network service providers (NSPs) and mobile network operators (MNOs) will be able to skirt around any net neutrality regulations by exploiting emerging technologies. With great power comes great responsibility.
I think it all starts with customer adoption, which has been quite rewarding. What I find interesting is the incredible breadth and diversity – almost every industry, customer size, geography, etc. Some storage vendors have to focus on only one or two niches; we get to work with just about everyone. And – thankfully – they've told us globally that they're very satisfied with what we're doing for them. That's strong praise from a notoriously tough crowd. Of course, there's room to do better, which we're always working on. Right behind that, I'm absolutely delighted with partner adoption: they've taken a very strong product and added all sorts of incredible value around it to satisfy their customers – and their businesses are growing as a result. That's great to see, and I hope it continues. From a pure product perspective, we're in a strong position as well. For example, my team focuses a lot on availability as seen by the customer, and we're now moving beyond five 9s. Just to put that in perspective, that's getting to be less than five and a half minutes of downtime per year on average. Very good, but there's always room to do better. The VNX is somewhat unique in that it can use the same flash drives as either a storage target or a persistent read/write cache. What the customer sees is simply amazing performance across Oracle, Exchange, SQL Server – just about any application that beats on data intensely.
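The five-nines figure quoted there is straightforward to check with the usual definition of availability as the fraction of a year the system is up:

```python
# Downtime implied by a given availability level over one year.
minutes_per_year = 365.25 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{nines} nines -> {downtime:.1f} minutes of downtime per year")

# 5 nines -> ~5.3 minutes per year, matching the "less than five and a half minutes" claim.
```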
Managing the Big Data growth problem in the enterprise is tough. While IT departments grapple with how to support complex Big Data environments, legal teams are tasked with making accommodations for Big Data in the already expensive e-Discovery process. How do legal teams account for emerging data types, and do it all with fewer resources and less budget spend? This webinar discusses how best to retain, access, discover and ultimately delete content in compliance with evolving regulations and case law. Mr. Taylor leads CommVault's Information Management business, including Information Risk, e-Discovery, Compliance and Information Search. Over the last 20 years he has worked for many large multi-nationals while building deep expertise in a range of topics including business intelligence, data warehousing, and application and information management. Most recently, he has gained specific experience in data retention and archiving, working for or with some of the leading companies in this field to gain specialist knowledge in information risk and compliance.
After posting the article “Choose link aggregation over Multi-NIC vMotion” I received a couple of similar questions. Pierre-Louis left a comment that covers most of them. Let me use this as an example and clarify how vMotion traffic flows through the stack of multiple load-balancing algorithms and policies: A question relating to Lee’s post. Is there any sense to you to use two uplinks bundled in an aggregate (LAG) with Multi-NIC vMotion to give on one hand more throughput to vMotion traffic and on the other hand dynamic protocol-driven mechanisms (either forced or LACP with stuff like Nexus1Kv or DVS 5.1)? Most of the time, when I’m working on a VMware environment, there is an EtherChannel (when vSphere
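Since the question turns on how a load-balancing policy picks a physical uplink inside a LAG, here is a toy version of source-destination IP hashing. The hash function and the two-uplink setup are deliberately simplified assumptions, not what vSphere or a physical switch actually computes, but they show why a single vMotion stream stays on one link.

```python
import ipaddress

def choose_uplink(src_ip, dst_ip, uplinks):
    """Toy src-dst IP hash: XOR the two addresses and take the result modulo
    the number of uplinks in the aggregate. Real IP-hash policies differ in
    detail but follow the same per-flow idea."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return uplinks[(src ^ dst) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]

# One vMotion stream between a fixed pair of VMkernel IPs always hashes to the
# same uplink, which is why a LAG alone can't spread a single stream across links.
print(choose_uplink("192.168.10.11", "192.168.10.21", uplinks))
# A second vMotion vmknic (different source IP) may land on the other uplink.
print(choose_uplink("192.168.10.12", "192.168.10.21", uplinks))
```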
Wikibon is a professional community solving technology and business problems through open source sharing of free advisory knowledge. For the moment, the world's largest data repositories are measured in multiple petabytes, which have already strained the limits of traditional IT architectures and infrastructures. The move to hyperscale computing, advanced by the likes of Amazon, Facebook and Google, promises to change the way large IT shops and cloud service providers manage and deliver their data in the near future, when exabytes of data will need to be served up to users or customers in a fraction of a second. The cost of commodity servers and storage is continually dropping. However, demand for additional compute power and, in particular, high-performance storage, along with operational expenses, is outpacing any savings advantage. But the bigger issue is speed, or time to first bid or byte. Amazon estimates that just a one-second delay in page load can cause a 7% loss in customer conversions. Put another way, Amazon estimated that every 100 milliseconds of latency cost it 1% in sales. Google found an extra 0.5 seconds in search page generation time dropped traffic by 20%. A broker could lose $4 million in revenues if its electronic trading platform is 5 milliseconds behind the competition.
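The latency figures quoted can be turned into a quick sensitivity calculation; the baseline revenue number below is an arbitrary assumption used only to show the arithmetic, not a figure from the article.

```python
# Illustrative arithmetic based on the percentages quoted above.
annual_revenue = 1_000_000_000      # assumed $1B baseline, not from the article

# Amazon: every 100 ms of latency ~ 1% of sales.
loss_per_100ms = annual_revenue * 0.01
print(f"Cost of 100 ms of extra latency: ${loss_per_100ms:,.0f} per year")

# A one-second delay in page load ~ 7% loss in conversions.
loss_one_second = annual_revenue * 0.07
print(f"Cost of a 1 s page-load delay:  ${loss_one_second:,.0f} per year")
```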
Listening to the ‘Speaking In Tech’ podcast got me thinking a bit more about the software-defined meme and wondering if it is a real thing as opposed to a load of hype; so for the time being I’ve decided to treat it as a real thing or at least that it might become a real thing…and in time, maybe a better real thing?The role of the storage array seems to be changing at present or arguably simplifying; the storage array is becoming where you store stuff which you want to persist. And that may sound silly but basically what I mean is that the storage array is not where you are going to process transactions. Your transactional storage will be as close to the compute as possible or at least this appears to be the current direction of travel.But there is also a certain amount of discussion and debate about storage quality of service, guaranteed performance and how we implement it.This all comes down to services, discovery and a subscription model. Storage devices will have to publish their capabilities via some kind of API; applications will use this to find what services and capabilities an array has and then subscribe to them.
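What such a discovery-and-subscription model might look like from the application side can be sketched as follows. The capability document, endpoint names and JSON shape are entirely hypothetical, since no standard API of this kind is named in the piece; this is only a way to make the publish/discover/subscribe idea concrete.

```python
import json

# Hypothetical capability document a storage array might publish via its management API.
ADVERTISED = json.loads("""
{
  "array": "array-01",
  "capabilities": [
    {"name": "gold",   "iops_guarantee": 50000, "latency_ms": 1,  "snapshots": true},
    {"name": "bronze", "iops_guarantee": 5000,  "latency_ms": 10, "snapshots": false}
  ]
}
""")

def discover(document, min_iops):
    """Find advertised service levels that meet the application's requirement."""
    return [c for c in document["capabilities"] if c["iops_guarantee"] >= min_iops]

def subscribe(document, capability_name, volume_gb):
    """Model a subscription request; a real array would return a provisioned volume."""
    return {"array": document["array"], "service_level": capability_name, "size_gb": volume_gb}

candidates = discover(ADVERTISED, min_iops=20000)
print(candidates)                                          # only "gold" qualifies
print(subscribe(ADVERTISED, candidates[0]["name"], volume_gb=500))
```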
Based on a recent customer inquiry regarding the use of DRS affinity rules with vSphere Storage Appliance (VSA) 5.1 environments, I wanted to share my response with everyone. Understanding the functional capabilities of vSphere features and how they integrate with the different solutions VMware provides is important, so I've decided to share this with the rest of the community. The question was based on a real customer scenario and was fairly long, as it involved other VMware products. I've condensed the question to the main concern below: "Is it necessary to create DRS affinity rules to pin the vSphere Storage Appliance to each host in a VSA cluster in order to prevent DRS from migrating the appliances to other ESXi hosts?" I believe this to be a valid concern and a great question, as the customer clearly understands the business problem DRS is designed to address, which is basically to mitigate contention between workloads in a DRS-enabled cluster. The answer to the question is "No", you don't need to create DRS affinity rules in order to prevent DRS from migrating the vSphere Storage Appliance VMs from one ESXi host to another, and here is why: