Person killed in shooting outside Target store in the Polaris area
Police received a call around 6:10 p.m. on July 9 reporting that a person had been shot at the Target store at 1485 Polaris Parkway, according to a police dispatcher.
Responding officers found a victim inside a black sedan suffering from at least one gunshot wound. Medics pronounced the victim dead at the scene at 6:19 p.m., the dispatcher said.
Homicide Unit detectives were called to the scene, where at least one shell casing and broken window glass were recovered.
No information about suspects was immediately available.
This is a breaking news story. Check back at Dispatch.com for more updates.
Reporter Shahid Meighan can be reached at smeighan@dispatch.com, at ShahidMeighan on X, and at shahidthereporter.dispatch.com on Bluesky.
This article originally appeared on The Columbus Dispatch: Person killed in shooting at Target store in Polaris