Blogs

Lullabot: Invaders! Securing "Smart" Devices on a Home Network

Drupal Planet - 5 hours 38 min ago

As a part of Lullabot’s security team, we’ve been keeping track of how the Internet of Things plays a role in our company security. Since we’re fully distributed, each employee works day-to-day over their home internet connection. This subreddit reminds us that most “smart” devices are actually quite dumb as far as security goes. With malware like Mirai actively focusing on home IoT devices including cameras, we know that anything we plug in will be under constant assault. However, there can be significant utility in connecting physical devices to your local network. So, my question: is it possible to connect an “IoT” device to my home network securely, even when it has known security issues?

An opportunity presented itself when we needed to buy a new baby monitor that supported multiple cameras. The Motorola MBP853CONNECT was on sale, and included both WiFi and a “regular” proprietary viewer. Let’s see how far we can get.

The Research

Before starting, I wanted to know if anyone else had done any testing with this model of camera. After searching for “motorola hubble security” (Hubble is the name of the mobile app), I came across Push To Hack: Reverse engineering an IP camera. This article goes into great detail about the many flaws they found in a different Motorola camera aimed at outdoor use. Given that both cameras are made by Binatone, and connect to the same remote services, it seemed likely that the MBP853 was subject to similar vulnerabilities. The real question was if Motorola updated all of their cameras to fix the reported bugs, or if they just updated a single line of cameras.

These articles were also great resources for figuring out what the cameras were capable of, and I wouldn’t have gotten as far in the time I had without them:

Goals

I wanted to answer these three questions about the cameras:

  1. Can the cameras be used in a purely “local” mode, without any cloud or internet connectivity at all?
  2. If not, can I allow just enough internet access to the camera so it allows local access, but blocks access to the cloud services?
  3. If I do need to use the Hubble app and cloud service, is it trustworthy enough to be sending images and sounds from my child’s bedroom?
The Infrastructure

I recently redid my home network, upgrading to an APU2 running OPNSense for routing, combined with a Unifi UAP-AC-PRO for wireless access. Both software stacks support VLANs—a way to segregate and control traffic between devices on the same ‘physical’ network. For WiFi, this means creating a separate SSID for the cameras, and assigning it a VLAN ID in the UniFi controller. Then, in OPNSense, I created a new interface with the same VLAN ID. On that interface, I enabled DHCP, and then set up basic firewall rules to block all traffic. That way, I could try setting up the camera while using Wireshark on my laptop to sniff the traffic, without worrying that I was exposing my real network to anything nefarious.

Packet Sniffing

One of the benefits of running a “real” operating system on your router is that all of your favorite network debugging tools are available, including tcpdump. Since Wireshark will be running on our local workstation, and not on the router, we need to capture the network traffic to a separate file. Once I knew the network interface name (from ifconfig), I used SSH along with -w - to redirect the packet dump to my workstation. If you have enough disk space on the router, you could also dump to a local file and transfer it afterwards.

$ ssh root@router-ip-or-hostname tcpdump -w - -i igb0_vlan3000 > packet-dump.pcap

After setting this up, I realized that this wouldn't show traffic of the initial setup. That’s because, in setup mode, the WiFi camera broadcasts an open WiFi network. You then have to use the Android or iOS mobile app to configure the camera so it has the credentials to your real network. So, for the first packet dump, I joined my laptop to the setup network along with my phone. Since the network was completely open, I could see all traffic on the network, including the API calls made by the mobile app to the camera.

Verifying the setup vulnerability

Let's make sure this smart camera is using HTTPS and keeps my WiFi password secure.

I wanted to see if the setup vulnerability documented by Context, which disclosed WiFi passwords in cleartext, also applied to this camera model. While I doubt anyone in my residential area is capturing traffic, this is a significant concern in high-density locations like apartment buildings. Also, since the cameras use the 2.4GHz band rather than 5GHz, their signal can reach pretty far, especially if all you’re trying to do is read traffic rather than establish a working connection. In the OPNSense firewall, I blocked all traffic on the “camera” VLAN. Then, I made sure I had a unique but temporary password on the WiFi network. That way, if the password was broadcast, at least I wasn’t broadcasting the password for a real network and forcing myself to reset it.

Once I started dumping traffic, I ran through the setup wizard with my phone. The wizard failed when it tested internet connectivity (which I had blocked), but I could at least capture the initial setup traffic.

In Wireshark, I filtered for HTTPS traffic:


Oh dear. The only traffic captured is from my phone trying to reach 66.111.4.148. According to dig -x 66.111.4.148, that IP resolves to www.fastmail.com - in other words, my email app checking for messages. I was expecting to see HTTPS traffic to the camera, given that the WiFi network was completely open. Let’s look for raw HTTP traffic.


This looks promising. I can see the HTTP commands sent to the camera fetching its version and other information. Wireshark’s “Follow HTTP stream” feature is very useful here, helping to reconstruct conversations that are spread over multiple packets and request/response pairs. For example, if I follow the “get version” conversation at number 3399:

GET /?action=command&command=get_version HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

get_version: 01.19.30
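Because the command API is plain, unauthenticated HTTP, anything joined to the camera's open setup network can issue the same commands the app does. Here's a minimal PHP sketch, assuming you are on the setup network and the camera answers at 192.168.193.1 as seen in the capture above:

<?php
// Sketch: replay the captured get_version command against the camera's
// setup-mode address. There is no authentication or TLS involved at all.
echo file_get_contents('http://192.168.193.1/?action=command&command=get_version');
// Prints something like: get_version: 01.19.30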

Let’s follow the setup_wireless command:

GET /?action=command&command=setup_wireless_save&setup=1002000071600000000606blueboxthisismypasswordcamera000000 HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

setup_wireless_save: 0

That doesn't look good. We can see in the GET:

  1. The SSID of the previous WiFi network my phone was connected to (“bluebox”).
  2. The password for the “camera” network (thisismypassword).
  3. The SSID of that network.

Presumably, this is patched in the latest firmware update. Of course, there’s no way to get the firmware without first configuring the camera. So, I opened up the Camera VLAN to the internet (but not the rest of my local network), and updated.

That process revealed another poor design decision in the Hubble app. When checking for firmware updates, the app fetches the version number from the camera. Then, it compares that to a version fetched from ota.hubble.in… over plain HTTP.


In other words, the firmware update itself is subject to a basic MITM attack, where an attacker could block further updates from being applied. At the least, this process should be over HTTPS, ideally with certificate pinning as well. Amusingly, the OTA server is configured for HTTPS, but the certificate expired the day I was writing this section.


After the update had finished, I reset the camera to factory defaults and checked again. This time, the setup_wireless_save GET was at least not in cleartext. However, I have no confidence that it isn’t easily decryptable, so I’m not posting it here.

Evaluating Day-to-Day Security

Assuming that the WiFi password was at least secure from casual attackers, I proceeded to add firewall rules to allow traffic from the camera to the internet, so I could complete the setup process. This was a tedious process. tcpdump along with the OPNSense list of “blocked traffic” was very helpful here. In the end, I had to allow:

  • DNS
  • NTP for time sync
  • HTTPS
  • HTTP
  • UDP traffic

I watched the IPs and hostnames used by the camera, which were all EC2 hosted servers. The “aliases” feature in OPNSense allowed me to configure the rules by hostname, instead of dealing with constantly changing IPs. Of course, given the above security issues, I wonder how secure their DNS registrations are.

Needing to allow HTTP was a red flag to me. So, after the setup finished, I disabled all rules except DNS and NTP. Then, I added a rule to let my normal home LAN access the CAMERA VLAN. I could then access the camera with an RTSP viewer at the URL:

rtsp://user:pass@camera-ip:6667/blinkhd/

Yes, the credentials actually are user and pass.

And tada! It looked like I had a camera I could use from my phone or laptop, or, better yet, that my wife and I could both watch at the same time. Neat stuff!

It All Falls Apart

After a fresh boot, everything seemed fine with the video streams. However, over a day or two, the streams would become more and more delayed, or would drop, and, eventually, I’d need to restart the camera. Wondering if this had something to do with my firewall rules, I re-enabled the HTTP, HTTPS, and UDP rules, and started watching the traffic.

Then, my phone started to get notification spammed.

At this point, I’d been using the cameras for about two weeks. As soon as I re-enabled access to Hubble, my phone got notifications about movement detected by the camera. I opened the first one… and there was a picture of my daughter, up in her room, in her jammies.

It was in the middle of the day, and she wasn’t home.

What I discovered is that the camera saves a still image every time it detects movement, and buffers the images locally until they can be sent. And, looking in Wireshark, I saw that the snapshots were being uploaded with an HTTP POST to snap.json without any encryption at all. Extracting the conversation, and then decoding the POST data (which was form data, not JSON!), I ended up with a picture.
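As an illustration of that decoding step, here is a rough PHP sketch. The form field name ('snap') and the base64 fallback are my assumptions rather than a documented format; check the actual keys in your own capture.

<?php
// Hypothetical sketch: pull a snapshot out of a snap.json POST body that was
// exported from Wireshark. Field name and encoding are guesses.
$body = file_get_contents('snap-post-body.txt');
parse_str($body, $fields);                      // the payload is form data, not JSON
$image = isset($fields['snap']) ? $fields['snap'] : '';
if (substr($image, 0, 2) !== "\xFF\xD8") {      // not already raw JPEG bytes?
  $image = base64_decode($image);
}
file_put_contents('snapshot.jpg', $image);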

I now had proof the camera was sending video data over the public internet without any security whatsoever. I blocked all internet access, including DNS, hoping that would still let local access work. It did!

Then, my wife and I started hearing random beeps in the middle of the night. Eventually, I tracked it to the cameras. They would beep every 15 minutes or so, as long as they didn’t have a working internet connection. This killed the cameras for home use, as they’d wake the whole family. Worse yet, even if we decided to allow internet access, if it was down in the middle of the night (our cable provider usually does maintenance at 3AM), odds are high we’d all be woken up. I emailed Motorola support, and they said there was no way to disable the beeping, other than to completely reset the cameras and not use the WiFi feature at all.

We’re now happily using the cameras as “dumb” devices.

Security Recommendations and Next Steps

Here are some ideas I had about how Motorola could secure future cameras:

  1. The initial setup problem could have been solved by using WPA2 on the camera. I’ve seen routers from ISPs work this way; the default credentials are unique per device, and printed on the bottom of the device. That would significantly mitigate the risk of a completely open setup process. Other devices include a Bluetooth radio for this purpose.
  2. Use encryption and authentication for all APIs. Of course, there are difficulties here, such as certificate management, hostname validation, and so on. However, this might be a good case where the app could validate based on a set of hardcoded properties, or accept all certificates signed by a custom CA root (see the sketch after this list).
  3. Mobile apps should validate the authenticity of the camera to prevent MITM attacks. This is a solved problem that Binatone simply hasn’t implemented.
  4. Follow HTTP specifications! All “write” commands for the camera API use HTTP GETs instead of POSTs. That means that proxies or other systems may inadvertently log sensitive data. And, since there’s no authentication, it opens up the API to CSRF vulnerabilities.
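
To illustrate the custom CA root idea from point 2: on the client side this is mostly a matter of telling the HTTP library to trust only the vendor's root certificate. A sketch using PHP's curl extension; the hostname and CA file path are placeholders, not anything Binatone ships.

<?php
// Sketch: validate the camera's certificate against a vendor-supplied root CA
// instead of the public CA bundle. Hostname and file path are hypothetical.
$ch = curl_init('https://camera.example.lan/?action=command&command=get_version');
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/vendor-root-ca.pem');
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
$response = curl_exec($ch);
curl_close($ch);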

In terms of recommendations to the Lullabot team, we currently recommend that any “IoT” devices be kept on completely separate networks from devices used for work. That’s usually as simple as creating a “guest” WiFi network. After this exercise, I think we’ll also recommend to treat any such devices as hostile, unless they have been proven otherwise. Remember, the “S” in “IoT” stands for “secure”.

Personally, I want to investigate hacking the camera firmware to remove the beeps entirely. I was able to capture the firmware from my phone (the app stores them in Android’s main storage), and since there’s no authentication, I’m guessing I could replace the beeps with silence, assuming they are WAV or MP3 files.

In the future, I’m hoping to find an IoT vendor with a security record that matches Apple’s, who is clearly the leader in mobile security. Until then, I’ll be sticking with dumb devices in my home.

Categories: Blogs

Drupal.org blog: What’s new on Drupal.org? - April 2017

Drupal Planet - 6 hours 18 min ago

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

At the end of April we joined the community at DrupalCon Baltimore. We met with many of you there, gave our update at the public board meeting, and hosted a panel detailing the last 6 months worth of changes on Drupal.org. If you weren't able to join us for this con, we hope to see you in Vienna!

Drupal.org updates

DrupalCon Vienna Full Site Launched!

Speaking of Vienna, in April we launched the full site for DrupalCon Vienna which will take place from September 26-29th, 2017. If you're going to join us in Europe you can book your hotel now, or submit a session. Registration for the event will be opening soon!

DrupalCon Nashville Announced with new DrupalCon Brand

Each year at DrupalCon, the location of the next conference is held as a closely guarded secret; the topic of speculation, friendly bets, and web crawlers looking for 403 pages. Per tradition, at the closing session we unveiled the next location for DrupalCon North America: Nashville, TN, taking place April 9-13, 2018. But this year there was an extra surprise.

We've unveiled the new brand for DrupalCon, which you will begin to see as the new consistent identity for the event from city to city and year to year. You'll still see the unique character of the city highlighted for each regional event, but with an overarching brand that creates a consistent voice for the event.

Starring Projects

Users on Drupal.org may now star their favorite projects - making it easier to find favorite modules and themes for future projects, and giving maintainers a new dimension of feedback to judge their project's popularity. Users can find a list of the projects they've starred on their user profile. Over time we'll begin to factor the number of stars into a project's ranking in search results.

At the same time that we made this change, we've also added a quick configuration for managing notification settings on a per-project basis. Users can opt to be notified of all issues for a project, only issues they've followed, or no issues. While these notification options have existed for some time, this new UI makes it easier than ever to control issue notifications in your inbox.

Project Browsing Improvements

One of the important functions of Drupal.org is to help Drupal site builders find the distributions, modules, and themes that are the best fit for their needs. In April, we spent some time improving project browsing and discovery.

Search is now weighted by project usage so the most widely used modules for a given search phrase will be more likely to be the top result.

We've also added a filter to the project browsing pages to allow you to filter results by the presence of a supported, stable release. This should make it easier for site builders to sort out mature modules from those still in initial development.

Better visual separation of Documentation Guide description and contents

In response to user feedback, we've updated the visual display of Documentation Guides, to create a clearer distinction between the guide description text and the teaser text for the content within the guides.

Promoting hosting listings on the Download & Extend page

To leverage Drupal to the fullest requires a good hosting partner, and so we've begun promoting our hosting listings on the Download and Extend page. We want Drupal.org to provide every Drupal evaluator with all of the tools they need to achieve success—from the code itself, to professional services, to hosting, and more.

Composer Sub-tree splits of Drupal are now available

For developers using Composer to manage their projects, sub-tree splits of Drupal Core and Components are now available. This allows PHP developers to use components of Drupal in their projects without having to depend on Drupal in its entirety.

DrupalCI Automatic Requeuing of Tests in the event of a CI Error

In the past, if the DrupalCI system encountered an error when attempting to run a test, the test would simply return a "CI error" message, and the user who submitted the test had to manually submit a new test. These errors would also cause the issues to be marked as 'Needs work' - potentially resetting the status of an otherwise RTBC issue.

We have updated Drupal.org's integration with DrupalCI so that instead of marking issues as needs work in the event of a CI Error, Drupal.org will instead automatically queue a retest.

Bugfix: Only retest one environment when running automatic RTBC retests

Finally, we've fixed a bug with the DrupalCI's automatic RTBC retest system. When Drupal HEAD changes, any RTBC patches are automatically retested to ensure that they still apply. It is only necessary to retest against the default or last-used test environment to ensure that the patch will work, but the automatic retests were being tested against every configured environment. We've fixed this issue, shortening queue times during a string of automatic retests and saving testing resources for the project.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects. In particular we want to thank:

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Categories: Blogs

myDropWizard.com: Drupal 6 security update for AES

Drupal Planet - 7 hours 8 min ago

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for the AES encryption module.

The AES module provides an API for encrypting and decrypting data via AES. It also allows storing Drupal passwords encrypted in the database (rather than hashed) which can allow site administrators with high enough permissions to view user passwords.

Previously, the module implemented AES poorly, such that the encryption was weakened and could potentially have made it easier for an attacker to decrypt the data, given enough examples of the encrypted output.

(A note about the timing of this release: the AES module was marked unsupported on March 1st, and we started working on a fix right away in the D6LTS queue. We usually release D6LTS patches the same day the D7/D8 patches are posted, or two weeks after a module is unsupported. However, in this case we had only a single Enterprise customer using AES, so we worked on it according to a timeline dictated by them, which involved testing their custom modules that use the AES API with their team. So, we're releasing this after it's been fully tested and deployed for our one affected customer - if more customers had been affected, it would have been released same-day, as usual.)

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the AES module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Categories: Blogs

Jeff Geerling's Blog: Drupal VM does Docker

Drupal Planet - 7 hours 41 min ago

Drupal VM has used Vagrant and (usually) VirtualBox to run Drupal infrastructure locally since its inception. But ever since Docker became 'the hot new thing' in infrastructure tooling, I've been asked when Drupal VM will convert to using Docker.

The answer to that question is a bit nuanced; Drupal VM has been using Docker to run its own integration tests for over a year (that's how I run tests on seven different OSes using Travis CI). And technically, Drupal VM's core components have always been able to run inside Docker containers (most of them use Docker-based integration tests as well).

Docker usage, however, was always an undocumented and unsupported feature of Drupal VM. No longer: with 4.5.0, Drupal VM now supports Docker as an experimental alternative to Vagrant + VirtualBox, and you can use Drupal VM with Docker in one of two ways:

Categories: Blogs

James Oakley: Meet Project Honey Pot

Drupal Planet - 8 hours 39 min ago

Spam is a problem that never goes away. Email spam. Comment spam.

This site has long used Mollom to protect the comment section and the contact form. They're closing down on 2nd April 2018, so lots of webmasters will be looking for alternatives.

I'd like to introduce you to Project Honey Pot.

Blog Category: Technology, Drupal Planet
Categories: Blogs

Flocon de toile | Freelance Drupal: Render programmatically a unique field from a node or an entity with Drupal 8

Drupal Planet - 11 hours 38 min ago
It may sometimes be necessary to render a single field of a content entity: for example, a simplified display of content related to the content being viewed, reuse of specific fields in other contexts, etc. Rendering a field programmatically can be problematic for Drupal 8's cache invalidation system, since the resulting render array would not contain the cache tags of the source entity. Let's take a look at some solutions available to us.
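For context, here is a minimal sketch of one common approach (not necessarily the exact solutions the article covers): render the field through the entity's view builder and explicitly attach the source entity's cacheability, so the render cache is invalidated when the node changes. The node ID, field name, and view mode below are examples only.

<?php
// Load a node and render just one of its fields.
$node = \Drupal\node\Entity\Node::load(42);
$build = \Drupal::entityTypeManager()
  ->getViewBuilder('node')
  ->viewField($node->get('field_image'), 'teaser');
// Attach the node's cache tags and contexts so this render array is
// invalidated when the source entity changes.
\Drupal\Core\Render\BubbleableMetadata::createFromObject($node)->applyTo($build);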
Categories: Blogs

myDropWizard.com: Presentation: Docker & Drupal for local development

Drupal Planet - 18 hours 15 min ago

Last week, I presented on "Docker & Drupal for local development" at Drupal414, the local Drupal meetup in Milwaukee, WI.

It included:

  • a basic introduction to the why's and how's of Docker,
  • a couple live demos, and
  • the details of how we use Docker as our local development environment to support & maintain hundreds of Drupal sites here at myDropWizard

The presentation wasn't recorded at the time, but it was so well received that I decided to record it again at my desk so I could share it with a wider audience. :-)

Here's the video:

Video of Docker & Drupal for Local Development

(Sorry for the poor audio! This was recorded sort of spontaneously...)

And here are the slides.

Please leave any questions or comments in the comments section below!

Categories: Blogs

Drupal Bits at Web-Dev: Drupal View with Nodequeue Priority

Drupal Planet - May 23, 2017 - 11:11pm

Sometimes you want a View that follows the internal logic of the filters you set up on the View, but that can also have some items hand-selected or curated to the top of the View. Another way to describe it is a Nodequeue View that is backfilled with other View-based logic, so that you end up with a full display regardless of how many items are actually in the Nodequeue.

Doing this requires three adjustments to the View (assuming you have already built the normal View logic based on filters that are separate from the Nodequeue):

  1. Make the Nodequeue a relationship to the View.
  2. Add the Nodequeue to the sort criteria.
  3. Restructure the filter settings to make it the Nodequeue logic OR the Filter logic.
Example: Nodequeue View with random Backfill

Let's say you have a 3 item View that gets used to display some promoted items on your home page.  You want the View to be populated by anything in the Nodequeue and then randomly backfilled with any other item(s) that match some filter criteria if the Nodequeue does not contain three items.

0) To start, create your View with a maximum of 3 items, set the filter(s) to use your backfill criteria (a status of published and limited to whatever entities you are using), and add a sort of Global: Random to randomly pick from items that meet the filter criteria.

1) Add your Nodequeue as a relationship.

You want to limit it to a specific Nodequeue. The relationship should not be required, or you will not have anything to backfill with.

2) Add the Nodequeue as sort criteria to the View.

Since we want the Nodequeue items to come first, and in order, we have to place this sort ahead of the rest of the View's sort criteria (which in this case is random).

3) Adjust the filter criteria and break it into logical sections. The first section is the set of filters that must be applied to all items, regardless of whether they are in the Nodequeue or not (the purple region below).

Then you need to create another filter group, and in this group put the filters that represent either the default logic OR the Nodequeue. The default logic in this case is that an audience field matches some criteria. The trick is to set the operator within this filter group to OR.

Now when you add, delete, or rearrange items in the Nodequeue, the View will match the order of the Nodequeue, and if you don't have enough items in your queue, it will backfill with other items that meet your criteria.

 

Caching Issues: By default, updating a nodequeue will not cause the cache on the View to expire if the View is cached. If you need the updates to be immediately visible to anonymous users, you can implement a hook_nodequeue_update() to clear the cache on any changes to that nodequeue.
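A minimal sketch of that hook for Drupal 7 / Views 3 follows. The hook name and parameter follow the nodequeue API mentioned above, and the queue and View machine names ('promoted_items') are placeholders; verify both against your own setup.

<?php
/**
 * Implements hook_nodequeue_update().
 *
 * Sketch: flush the cached output of the promoted-items View whenever the
 * queue changes, so anonymous users see the new order immediately.
 */
function mymodule_nodequeue_update($sqid) {
  $queue = nodequeue_load($sqid);
  if ($queue && $queue->name == 'promoted_items') {
    // Views 3 keeps its result and output caches in this bin, keyed by the
    // View name; TRUE treats the cid as a wildcard prefix.
    cache_clear_all('promoted_items:', 'cache_views_data', TRUE);
  }
}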

Categories: Blogs

Drupal @ Penn State: Streamlining Polymer one-page apps in Drupal

Drupal Planet - May 23, 2017 - 7:04pm

On the ELMS:LN team, we’ve been working a lot with polymer and webcomponent based development this year. It’s our new workflow for all front-end development and we want Drupal to be the best platform for this type of development. At first, we made little elements and they were good. We stacked them together, and started integrating them into our user interfaces and polyfills made life happy.

Categories: Blogs

Chromatic: Chromatic's DrupalCon Baltimore Recap

Drupal Planet - May 23, 2017 - 3:00pm

As always, Chromatic had a great time at DrupalCon - we brought knowledge to share, and learned a lot!

Categories: Blogs

Dries Buytaert: Acquia's next phase

Drupal Planet - May 23, 2017 - 1:08pm

In 2007, Jay Batson and I wanted to build a software company based on open source and Drupal. I was 29 years old then, and eager to learn how to build a business that could change the world of software, strengthen the Drupal project and help drive the future of the web.

Tom Erickson joined Acquia's board of directors with an outstanding record of scaling and leading technology companies. About a year later, after a lot of convincing, Tom agreed to become our CEO. At the time, Acquia was 30 people strong and we were working out of a small office in Andover, Massachusetts. Nine years later, we can count 16 of the Fortune 100 among our customers, saw our staff grow from 30 to more than 750 employees, have more than $150MM in annual revenue, and have 14 offices across 7 countries. And, importantly, Acquia has also made an undeniable impact on Drupal, as we said we would.

I've been lucky to have had Tom as my business partner and I'm incredibly proud of what we have built together. He has been my friend, my business partner, and my professor. I learned first hand the complexities of growing an enterprise software company; from building a culture, to scaling a global team of employees, to making our customers successful.

Today is an important day in the evolution of Acquia:

  • Tom has decided it's time for him to step down as CEO, giving him more flexibility with his personal time and letting him act more as an advisor to companies, the role that brought him to Acquia in the first place.
  • We're going to search for a new CEO for Acquia. When we find that business partner, Tom will be stepping down as CEO. After the search is completed, Tom will remain on Acquia's Board of Directors, where he can continue to help advise and guide the company.
  • We are formalizing the working relationship I've had with Tom during the past 8 years by creating an Office of the CEO. I will focus on product strategy, product development, including product architecture and Acquia's roadmap; technology partnerships and acquisitions; and company-wide hiring and staffing allocations. Tom will focus on sales and marketing, customer success and G&A functions.

The time for these changes felt right to both of us. We spent the first decade of Acquia laying down the foundation of a solid business model for going out to the market and delivering customer success with Drupal – Tom's core strengths from his long career as a technology executive. Acquia's next phase will be focused on building confidently on this foundation with more product innovation, new technology acquisitions and more strategic partnerships – my core strengths as a technologist.

Tom is leaving Acquia in a great position. This past year, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the most significant positive move of all vendors in last year's Gartner Magic Quadrant for Web Content Management and that Forrester recognized Acquia as the leader for strategy and vision. We increasingly find ourselves at the center of our customers' technology and digital strategies. At a time when digital experiences mean more than just web content management, and data and content intelligence play an increasing role in defining success for our customers, we are well positioned for the next phase of our growth.

I continue to love the work I do at Acquia each day. We have a passionate team of builders and dreamers, doers and makers. To the Acquia team around the world: 2017 will be a year of changes, but you have my commitment, in every way, to lead Acquia with clarity and focus.

To read Tom's thoughts on the transition, please check out his blog. Michael Skok, Acquia's lead investor, also covered it on his blog.

Categories: Blogs

TimOnWeb.com: Default Search API Sorts Per View in Drupal 7

Drupal Planet - May 23, 2017 - 12:54pm

It's been a while since I've written a post here (especially, Drupal-related). But today I have something interesting to share.

There's a module called Search API sorts (https://drupal.org/project/search_api_sorts) that provides custom sorts and a global sort block for Search API. The module itself is ok, but ...

Read now

Categories: Blogs

Vardot: Time to level up - Ditch Drupal 6 for the all new Drupal 8

Drupal Planet - May 23, 2017 - 12:10pm
News | Read time: 4 minutes

Drupal 6 kicked off way back in 2008. For its time it was a major breakthrough in technology, and the platform supported many major websites, including whitehouse.gov. Over its lifespan Drupal 6 had more than 700 contributed modules and 600 custom themes. It boasted a nicer menu structure and an easier installation process than its predecessors, as well as improved security and a handy drag-and-drop menu. Drupal 6 was well ahead of its time. Now it is unsupported, outdated and, frankly, old. It’s time for you and your website to move on.

The complete history of Drupal

 

What’s new in Drupal?

Drupal 8 (released November 2015) comes with a whole set of new built-in gadgets, including mobile-responsive themes, built-in web services to make it an API-first CMS, an improved editorial experience, accessibility, powerful multilingual tools (at last), improved performance, HTML5, and better SEO and analytics tools. More than 18 months after its release, it has become reliably stable, secure, and ready for you to make the switch.

Check out our 7 Reasons why Now is the Right Time to Move to Drupal 8

 

Why Drupal 6 isn’t a safe bet anymore

Without support from the community, Drupal 6 is going to be exposed to more and more security risks. Its modules will become outdated and unwieldy, and users will struggle to get the performance they’ve come to expect from modern websites. While upgrading may seem like a daunting task, the business risks of remaining on Drupal 6 are far higher.

 

Migrations - easier than you think?

 

Believe it or not, Drupal 8 is stacked full of migration modules and toolsets to help you move your content from one platform to another. While many of these focus simply on moving a site between completely different platforms, some are designed to assist with moving between versions of Drupal. Depending on how your website was developed, these can be tricky to use, and can lead to many hours of rework ‘rebuilding’ your website at the other end. If your website is stacked full of custom features, you may find that stock migration modules don’t quite provide the service you need.

 

Partners in Migration

If you’re a tech whizz with a small website and plenty of time, you might find migrating your site on your own an exciting and economically sound venture. However, Drupal has become such a user-friendly platform that many of its users’ skill sets are in marketing, communications, and social relations. If that’s you, perhaps the thought of trying to move all your web content to another platform is so daunting that you’ve been carefully looking the other way while Drupal 8 was released and took the world by storm.

With our assistance, your migration can not only be smooth and painless, but an opportunity to resolve some of those niggling website issues, and take a step forward into greater customer engagement. A shift to Drupal 8 can help you improve your conversions whilst making site maintenance easier.

 

Vardot - Drupal experts since 2011

Here at Vardot we’ve been supporting people since 2011. With our specialist team of Drupal experts, we’re prepared to help migrate anything from a small two-page website to a large-scale site with multiple custom modules and integrations. Working with our team, you’ll be on a first-name basis with our staff, and there is no shuffling between departments.

We believe in empowering our customers and our community - by giving back to the open source community. We promote a vibrant culture that benefits everyone involved. Working with us goes hand in hand with giving back, and you can be sure we’ll equip you with the skills and knowledge you need for the day-to-day management of your website moving forward.

If you have a site that needs migrating, or just a refresh, get in touch with us, we can’t wait to hear from you.

Tags: drupal 8, Drupal Planet
Categories: Blogs

Drupal core announcements: Midwest Developer Summit 2017-08-11 -- 2017-08-13 -- Register Now!

Drupal Planet - May 23, 2017 - 11:35am

Make your plans to join us for the Drupal Midwest Developer Summit, August 11-13, on the University of Michigan campus, in Ann Arbor MI.

Register here

The Event
Join us for 3 days this summer in Ann Arbor, Michigan, for the 2017 Midwest Drupal Summit.
For this year’s Summit, we’ll gather on the beautiful University of Michigan campus for three days of code sprints (working on issues such as porting modules and writing and updating documentation) and informal presentations. We will start around 10AM and finish around 5PM each day.
Food
Lunch, Coffee and Snacks will be provided each day.

What’s New This Year at MWDS?
This year, we’re adding lightning talks (more Drupal learnings!) and a social outing (more Drupal fun!)

What’s The Same?

Relaxed, low-key sprinting and socializing with Drupal core contributors and security team members.

What you can expect:

  • An opportunity to learn from Drupal core contributors and mentors, including Angie “webchick” Byron, Michael Hess, Peter Wolanin, Neil Drumm and xjm.
  • Code sprints. Let’s clear out some queues!
  • Help Porting modules to Drupal 8.
  • Lightning talks
  • Security issue sprints
  • Documentation writing
  • Good food and good community.

Location

Ann Arbor is about 30 minutes by car from Detroit Metro Airport. Ann Arbor is also served by Amtrak.
Questions? Contact mlhess@drupal.org

Categories: Blogs

2bits: Antibot module for Comment Spam, Alternative to Mollom End Of Life

Drupal Planet - May 23, 2017 - 9:14am

Acquia has announced the end of life for Mollom, the comment spam filtering service.

Mollom was created by Dries Buytaert and Benjamin Schrauwen, and launched to a few beta testers (including myself) in 2007. Mollom was acquired by Acquia in 2012.

The service worked generally well, with the occasional spam comment getting through. The stated reason for stopping the service is that spammers have gotten more sophisticated, and that perhaps means that Mollom needs to try harder to keep up with the ever changing tactics. Much like computer viruses and malware, spam (email or comments) is an arms race scenario.

The recommended alternative by Acquia is a combination of reCAPTCHA and Honeypot.

But there is a problem with this combination: reCAPTCHA, like all modules that depend on the CAPTCHA module, disables the page cache for any form that has CAPTCHA enabled.

This is due to this piece of code in captcha.module:

// Prevent caching of the page with CAPTCHA elements.
// This needs to be done even if the CAPTCHA will be ommitted later:
// other untrusted users should not get a cached page when
// the current untrusted user can skip the current CAPTCHA.
drupal_page_is_cacheable(FALSE);

Another alternative that we have been using, and which does not disable the page cache, is the Antibot module.

To install the antibot module, you can use your git repository, or the following drush commands:

drush dis mollom
drush dl antibot
drush en antibot

Visit the configuration page for antibot if you want to add more forms that use the module, or disable it for other forms. The default settings work for comments, user registrations, and user logins.

Because of the above-mentioned arms-race situation, expect spammers to come up with circumvention techniques at some point in the future, and there will be a need for other measures, whether in Antibot or in other alternatives.

Categories: Blogs

Agiledrop.com Blog: AGILEDROP: DrupalCon session about Coding and Development

Drupal Planet - May 23, 2017 - 2:13am
Last time, we gathered together DrupalCon Baltimore sessions about Project Management. Before that, we explored Case Studies. We promised we would also look at some other areas, so this time we'll see which sessions were presented in the area of Coding and Development. In "Code Standards: It's Okay to be Yourself, But Write Your Code Like Everyone Else" by Alanna Burke from Chromatic, attendees learned both formatting standards for their code and documentation standards, as well as some specifics for things like Twig and object-oriented programming in Drupal 8. The… READ MORE
Categories: Blogs

Drupal Modules: The One Percent: Drupal Modules: The One Percent — Footermap (video tutorial)

Drupal Planet - May 22, 2017 - 1:54pm
NonProfit | Mon, 05/22/2017 - 11:54 | Episode 28

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Footermap, a module which renders expanded menus in a block.

Categories: Blogs

Valuebound: How to set the right expectations for project delivery?

Drupal Planet - May 22, 2017 - 1:46pm

Setting a clear list of expectations with the client for a project delivery goes a long way toward great client relationships. Mismatched and misunderstood project goals and targets always lead to dissatisfaction among team members, account heads, and all other stakeholders.

I manage a team of a few developers who build web applications in Drupal. While working on projects with my team, I have had the chance to practice a few of the points that I have mentioned in the article. It has not only kept us on track but also kept people happy and motivated.

What should you do?

Be involved from the beginning

When you begin a project, make sure that you and your team members are involved in the project from the beginning. There are times when the team would expand…

Categories: Blogs

MD Systems blog: Using Commerce for a newspaper subscription shop

Drupal Planet - May 22, 2017 - 1:09pm
Commerce 2.x has a lot of changes in comparison to Commerce 1.x. In this blog post, we write about how those changes affected us in our project and what we did to resolve certain problems that showed up as we used Commerce 2.x for a newspaper subscription shop.
Categories: Blogs

Appnovation Technologies: The Future of Drupal

Drupal Planet - May 22, 2017 - 1:04pm
*Cross-posted from Millwood Online. Over the past month there has been a lot of focus on Drupal, the community. More recently it seems people are back to thinking about the software. Dave Hall and David Hernandez both posted eye-opening posts with thoughts and ideas of what needs doing and how we can move forward. A one-line summary of those posts would be "...
Categories: Blogs
